Test Report: Docker_Linux_crio 21833

839ba12bf3f470fdbddc75955152cc8402fc5889:2025-11-01:42154

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 15.1
36 TestAddons/parallel/RegistryCreds 0.43
37 TestAddons/parallel/Ingress 147.89
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.39
41 TestAddons/parallel/CSI 26.56
42 TestAddons/parallel/Headlamp 2.7
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 13.18
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 5.27
47 TestAddons/parallel/AmdGpuDevicePlugin 6.25
97 TestFunctional/parallel/ServiceCmdConnect 603
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.67
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.98
141 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.96
142 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.74
143 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.36
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.55
154 TestFunctional/parallel/ServiceCmd/URL 0.55
191 TestJSONOutput/pause/Command 1.68
197 TestJSONOutput/unpause/Command 1.77
285 TestPause/serial/Pause 7.97
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 4.08
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.55
356 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.38
357 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.73
370 TestStartStop/group/old-k8s-version/serial/Pause 8.03
373 TestStartStop/group/no-preload/serial/Pause 8.29
380 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.18
383 TestStartStop/group/embed-certs/serial/Pause 5.54
386 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.37
392 TestStartStop/group/newest-cni/serial/Pause 5.24
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable volcano --alsologtostderr -v=1: exit status 11 (261.881965ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1101 08:58:13.927361  117844 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:13.927652  117844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:13.927662  117844 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:13.927666  117844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:13.927956  117844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:13.928276  117844 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:13.928661  117844 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:13.928680  117844 addons.go:607] checking whether the cluster is paused
	I1101 08:58:13.928780  117844 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:13.928806  117844 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:13.929258  117844 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:13.948588  117844 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:13.948647  117844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:13.967368  117844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:14.068512  117844 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:14.068614  117844 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:14.098234  117844 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:14.098269  117844 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:14.098273  117844 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:14.098279  117844 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:14.098283  117844 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:14.098297  117844 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:14.098302  117844 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:14.098306  117844 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:14.098310  117844 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:14.098323  117844 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:14.098331  117844 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:14.098336  117844 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:14.098343  117844 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:14.098347  117844 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:14.098354  117844 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:14.098361  117844 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:14.098367  117844 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:14.098373  117844 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:14.098377  117844 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:14.098381  117844 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:14.098385  117844 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:14.098389  117844 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:14.098397  117844 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:14.098401  117844 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:14.098405  117844 cri.go:89] found id: ""
	I1101 08:58:14.098459  117844 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:14.114561  117844 out.go:203] 
	W1101 08:58:14.115903  117844 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:14.115952  117844 out.go:285] * 
	W1101 08:58:14.119349  117844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:14.120786  117844 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)
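
Note on the failure mode: every MK_ADDON_DISABLE_PAUSED exit in this run comes from the same probe. Before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and that command fails here because /run/runc does not exist; the crictl listing immediately above it succeeds, which suggests CRI-O on this image is running containers with a different OCI runtime (or state root) than runc, e.g. crun. A minimal triage sketch, assuming the profile name from the log and that crio and crictl are on the node's PATH:

    # open a shell on the node for this profile
    out/minikube-linux-amd64 -p addons-993117 ssh

    # the exact probe minikube runs; reproduces "open /run/runc: no such file or directory"
    sudo runc list -f json

    # check which OCI runtime CRI-O is configured with (crun vs runc)
    sudo crio config | grep -A 3 default_runtime

    # runtime-agnostic cross-check through the CRI, the same call that succeeds in the log above
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

If CRI-O is indeed using crun, runc never creates its /run/runc state directory, which matches the stderr above.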

                                                
                                    
TestAddons/parallel/Registry (15.1s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.391939ms
I1101 08:58:24.971271  107955 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1101 08:58:24.971298  107955 kapi.go:107] duration metric: took 3.268423ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003599371s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00321346s
addons_test.go:392: (dbg) Run:  kubectl --context addons-993117 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-993117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-993117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.591101215s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable registry --alsologtostderr -v=1: exit status 11 (276.926717ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1101 08:58:39.855234  119962 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:39.855524  119962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:39.855536  119962 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:39.855540  119962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:39.855793  119962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:39.856162  119962 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:39.856665  119962 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:39.856688  119962 addons.go:607] checking whether the cluster is paused
	I1101 08:58:39.856791  119962 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:39.856804  119962 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:39.857206  119962 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:39.880772  119962 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:39.880837  119962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:39.904337  119962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:40.007741  119962 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:40.007840  119962 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:40.037871  119962 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:40.037922  119962 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:40.037931  119962 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:40.037936  119962 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:40.037941  119962 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:40.037945  119962 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:40.037948  119962 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:40.037951  119962 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:40.037955  119962 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:40.037963  119962 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:40.037967  119962 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:40.037970  119962 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:40.037975  119962 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:40.037978  119962 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:40.037982  119962 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:40.037988  119962 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:40.037998  119962 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:40.038003  119962 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:40.038007  119962 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:40.038011  119962 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:40.038015  119962 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:40.038019  119962 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:40.038023  119962 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:40.038027  119962 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:40.038031  119962 cri.go:89] found id: ""
	I1101 08:58:40.038076  119962 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:40.052459  119962 out.go:203] 
	W1101 08:58:40.054112  119962 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:40.054132  119962 out.go:285] * 
	W1101 08:58:40.057278  119962 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:40.058870  119962 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.10s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.323238ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-993117
addons_test.go:332: (dbg) Run:  kubectl --context addons-993117 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (256.442196ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1101 08:58:49.036419  121430 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:49.036550  121430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:49.036569  121430 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:49.036575  121430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:49.036828  121430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:49.037129  121430 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:49.037472  121430 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:49.037500  121430 addons.go:607] checking whether the cluster is paused
	I1101 08:58:49.037614  121430 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:49.037637  121430 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:49.038070  121430 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:49.056262  121430 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:49.056343  121430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:49.074466  121430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:49.173966  121430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:49.174047  121430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:49.204288  121430 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:49.204317  121430 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:49.204324  121430 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:49.204329  121430 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:49.204354  121430 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:49.204358  121430 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:49.204360  121430 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:49.204363  121430 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:49.204366  121430 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:49.204372  121430 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:49.204377  121430 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:49.204380  121430 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:49.204383  121430 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:49.204385  121430 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:49.204388  121430 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:49.204400  121430 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:49.204408  121430 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:49.204411  121430 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:49.204414  121430 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:49.204417  121430 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:49.204422  121430 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:49.204424  121430 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:49.204426  121430 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:49.204429  121430 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:49.204431  121430 cri.go:89] found id: ""
	I1101 08:58:49.204469  121430 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:49.218443  121430 out.go:203] 
	W1101 08:58:49.219827  121430 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:49.219855  121430 out.go:285] * 
	W1101 08:58:49.222950  121430 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:49.224126  121430 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

                                                
                                    
TestAddons/parallel/Ingress (147.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-993117 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-993117 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-993117 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [ce647f03-4892-49f6-9923-ec10d66fd781] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [ce647f03-4892-49f6-9923-ec10d66fd781] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002766233s
I1101 08:58:50.518877  107955 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.249772734s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
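
curl exits with status 28 on a timeout (the ssh wrapper propagates it), so port 80 on the node was reached but never answered within the 2m14s window, rather than refusing the connection outright. A quick triage sketch, assuming the context and namespace names shown in this log:

    # is the controller Ready, and does its service have endpoints?
    kubectl --context addons-993117 -n ingress-nginx get pods -o wide
    kubectl --context addons-993117 -n ingress-nginx get svc,endpoints

    # was the Ingress from testdata/nginx-ingress-v1.yaml admitted?
    kubectl --context addons-993117 get ingress -A

    # re-run the probe verbosely with a short timeout from inside the node
    out/minikube-linux-amd64 -p addons-993117 ssh -- curl -v --max-time 10 -H "Host: nginx.example.com" http://127.0.0.1/
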
addons_test.go:288: (dbg) Run:  kubectl --context addons-993117 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-993117
helpers_test.go:243: (dbg) docker inspect addons-993117:

-- stdout --
	[
	    {
	        "Id": "d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496",
	        "Created": "2025-11-01T08:55:53.852267328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 109978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:55:53.895628926Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/hosts",
	        "LogPath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496-json.log",
	        "Name": "/addons-993117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-993117:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-993117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496",
	                "LowerDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-993117",
	                "Source": "/var/lib/docker/volumes/addons-993117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-993117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-993117",
	                "name.minikube.sigs.k8s.io": "addons-993117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f59be55a36e79acb763b4b6bca255d53a6a1ad9a75ad0e25ed66f87587a6a830",
	            "SandboxKey": "/var/run/docker/netns/f59be55a36e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-993117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:cf:a3:0f:b1:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45cd133f52f80206aac8969a8f74f81258b1da37ef0e39e860a4b8aff91aaab7",
	                    "EndpointID": "0b98a5a37bde8d1a251ed5051bbf98f524bc98ecec24e832d99989cc2c032807",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-993117",
	                        "d9e4415568e0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-993117 -n addons-993117
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-993117 logs -n 25: (1.196931315s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-394939 --alsologtostderr --binary-mirror http://127.0.0.1:41257 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-394939 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │
	│ delete  │ -p binary-mirror-394939                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-394939 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │
	│ addons  │ disable dashboard -p addons-993117                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │
	│ addons  │ enable dashboard -p addons-993117                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │
	│ start   │ -p addons-993117 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:58 UTC │
	│ addons  │ addons-993117 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ enable headlamp -p addons-993117 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ ip      │ addons-993117 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	│ addons  │ addons-993117 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ ssh     │ addons-993117 ssh cat /opt/local-path-provisioner/pvc-0365a22a-6c12-401f-8fad-405ba975828f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	│ addons  │ addons-993117 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-993117                                                                                                                                                                                                                                                                                                                                                                                           │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │ 01 Nov 25 08:58 UTC │
	│ addons  │ addons-993117 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ ssh     │ addons-993117 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ addons  │ addons-993117 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │
	│ ip      │ addons-993117 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-993117        │ jenkins │ v1.37.0 │ 01 Nov 25 09:01 UTC │ 01 Nov 25 09:01 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:55:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:55:32.548812  109339 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:55:32.549104  109339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:32.549115  109339 out.go:374] Setting ErrFile to fd 2...
	I1101 08:55:32.549119  109339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:32.549340  109339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:55:32.549849  109339 out.go:368] Setting JSON to false
	I1101 08:55:32.550794  109339 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2271,"bootTime":1761985062,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:55:32.550898  109339 start.go:143] virtualization: kvm guest
	I1101 08:55:32.552588  109339 out.go:179] * [addons-993117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:55:32.553717  109339 notify.go:221] Checking for updates...
	I1101 08:55:32.553766  109339 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:55:32.554800  109339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:55:32.555942  109339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 08:55:32.557139  109339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 08:55:32.558247  109339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:55:32.559237  109339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:55:32.560357  109339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:55:32.583557  109339 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:55:32.583728  109339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:32.643823  109339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-01 08:55:32.632728071 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:32.643942  109339 docker.go:319] overlay module found
	I1101 08:55:32.645813  109339 out.go:179] * Using the docker driver based on user configuration
	I1101 08:55:32.647067  109339 start.go:309] selected driver: docker
	I1101 08:55:32.647086  109339 start.go:930] validating driver "docker" against <nil>
	I1101 08:55:32.647097  109339 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:55:32.647606  109339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:32.709308  109339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-01 08:55:32.698413994 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:32.709477  109339 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:55:32.709675  109339 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:55:32.711307  109339 out.go:179] * Using Docker driver with root privileges
	I1101 08:55:32.714031  109339 cni.go:84] Creating CNI manager for ""
	I1101 08:55:32.714119  109339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:55:32.714135  109339 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:55:32.714258  109339 start.go:353] cluster config:
	{Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:55:32.715607  109339 out.go:179] * Starting "addons-993117" primary control-plane node in "addons-993117" cluster
	I1101 08:55:32.716636  109339 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:55:32.717887  109339 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:55:32.718837  109339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:32.718877  109339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:55:32.718893  109339 cache.go:59] Caching tarball of preloaded images
	I1101 08:55:32.718926  109339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:55:32.719007  109339 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:55:32.719024  109339 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:55:32.719429  109339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/config.json ...
	I1101 08:55:32.719462  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/config.json: {Name:mk7fb1382f374dec11d4a262e2754219dc35c482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:32.735898  109339 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:55:32.736045  109339 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:55:32.736068  109339 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:55:32.736074  109339 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:55:32.736087  109339 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:55:32.736095  109339 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 08:55:45.495609  109339 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 08:55:45.495660  109339 cache.go:233] Successfully downloaded all kic artifacts
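
The cache phase above reduces to one daemon query: if the kicbase image is already loaded, the pull/load is skipped. A minimal bash sketch assuming the tag from this log (the sha256 digest pin is dropped for brevity, and `docker pull` stands in for minikube's cached-tarball load):

	KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773'
	if docker image inspect "$KIC" >/dev/null 2>&1; then
	  echo "kicbase already present in the local daemon"
	else
	  docker pull "$KIC"   # minikube loads its cached tarball here instead of pulling
	fi
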
	I1101 08:55:45.495705  109339 start.go:360] acquireMachinesLock for addons-993117: {Name:mkba6252113cec7e55aec81713c4f8d8e7b23cec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:55:45.495820  109339 start.go:364] duration metric: took 90.983µs to acquireMachinesLock for "addons-993117"
	I1101 08:55:45.495854  109339 start.go:93] Provisioning new machine with config: &{Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:55:45.495965  109339 start.go:125] createHost starting for "" (driver="docker")
	I1101 08:55:45.497661  109339 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 08:55:45.497885  109339 start.go:159] libmachine.API.Create for "addons-993117" (driver="docker")
	I1101 08:55:45.497926  109339 client.go:173] LocalClient.Create starting
	I1101 08:55:45.498037  109339 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem
	I1101 08:55:45.883103  109339 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem
	I1101 08:55:46.075456  109339 cli_runner.go:164] Run: docker network inspect addons-993117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 08:55:46.092641  109339 cli_runner.go:211] docker network inspect addons-993117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 08:55:46.092716  109339 network_create.go:284] running [docker network inspect addons-993117] to gather additional debugging logs...
	I1101 08:55:46.092734  109339 cli_runner.go:164] Run: docker network inspect addons-993117
	W1101 08:55:46.110331  109339 cli_runner.go:211] docker network inspect addons-993117 returned with exit code 1
	I1101 08:55:46.110363  109339 network_create.go:287] error running [docker network inspect addons-993117]: docker network inspect addons-993117: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-993117 not found
	I1101 08:55:46.110387  109339 network_create.go:289] output of [docker network inspect addons-993117]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-993117 not found
	
	** /stderr **
	I1101 08:55:46.110492  109339 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:55:46.128565  109339 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a8830}
	I1101 08:55:46.128604  109339 network_create.go:124] attempt to create docker network addons-993117 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 08:55:46.128666  109339 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-993117 addons-993117
	I1101 08:55:46.187525  109339 network_create.go:108] docker network addons-993117 192.168.49.0/24 created
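
The inspect-then-create dance above (the inspect fails with exit code 1, so the network gets created) condenses to one shell conditional; every flag below is copied from the logged `docker network create` command:

	docker network inspect addons-993117 >/dev/null 2>&1 || \
	  docker network create --driver=bridge \
	    --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=addons-993117 \
	    addons-993117
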
	I1101 08:55:46.187558  109339 kic.go:121] calculated static IP "192.168.49.2" for the "addons-993117" container
	I1101 08:55:46.187625  109339 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 08:55:46.204134  109339 cli_runner.go:164] Run: docker volume create addons-993117 --label name.minikube.sigs.k8s.io=addons-993117 --label created_by.minikube.sigs.k8s.io=true
	I1101 08:55:46.223136  109339 oci.go:103] Successfully created a docker volume addons-993117
	I1101 08:55:46.223220  109339 cli_runner.go:164] Run: docker run --rm --name addons-993117-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-993117 --entrypoint /usr/bin/test -v addons-993117:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 08:55:49.524867  109339 cli_runner.go:217] Completed: docker run --rm --name addons-993117-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-993117 --entrypoint /usr/bin/test -v addons-993117:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.301598701s)
	I1101 08:55:49.524900  109339 oci.go:107] Successfully prepared a docker volume addons-993117
	I1101 08:55:49.524945  109339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:49.524972  109339 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 08:55:49.525048  109339 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-993117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 08:55:53.776330  109339 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-993117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.251239922s)
	I1101 08:55:53.776368  109339 kic.go:203] duration metric: took 4.251390537s to extract preloaded images to volume ...
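
Volume preparation and preload extraction follow a create / probe / extract pattern. The commands below are the logged ones, with the image reference shortened to its tag and the preload path parameterized against a default MINIKUBE_HOME:

	KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773'
	PRELOAD="$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4"
	docker volume create addons-993117 \
	  --label name.minikube.sigs.k8s.io=addons-993117 \
	  --label created_by.minikube.sigs.k8s.io=true
	# probe: named volumes are seeded from the image on first mount,
	# so /var/lib must exist once the volume is usable
	docker run --rm --entrypoint /usr/bin/test -v addons-993117:/var "$KIC" -d /var/lib
	# unpack the preloaded container images into the volume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD:/preloaded.tar:ro" -v addons-993117:/extractDir \
	  "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir
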
	W1101 08:55:53.776462  109339 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 08:55:53.776497  109339 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 08:55:53.776536  109339 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 08:55:53.835129  109339 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-993117 --name addons-993117 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-993117 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-993117 --network addons-993117 --ip 192.168.49.2 --volume addons-993117:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 08:55:54.139942  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Running}}
	I1101 08:55:54.158837  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:55:54.176287  109339 cli_runner.go:164] Run: docker exec addons-993117 stat /var/lib/dpkg/alternatives/iptables
	I1101 08:55:54.218769  109339 oci.go:144] the created container "addons-993117" has a running status.
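
The two inspects above are single-shot status reads; anyone reproducing this step by hand would more likely poll. The loop is an illustrative elaboration (not what start.go does), while the `stat` probe is the logged iptables check:

	until [ "$(docker container inspect -f '{{.State.Running}}' addons-993117)" = "true" ]; do
	  sleep 1
	done
	docker exec addons-993117 stat /var/lib/dpkg/alternatives/iptables
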
	I1101 08:55:54.218802  109339 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa...
	I1101 08:55:54.331223  109339 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 08:55:54.356976  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:55:54.379027  109339 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 08:55:54.379054  109339 kic_runner.go:114] Args: [docker exec --privileged addons-993117 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 08:55:54.421059  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:55:54.447107  109339 machine.go:94] provisionDockerMachine start ...
	I1101 08:55:54.447234  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:54.470450  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:54.470773  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:54.470795  109339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:55:54.617800  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-993117
	
	I1101 08:55:54.617850  109339 ubuntu.go:182] provisioning hostname "addons-993117"
	I1101 08:55:54.617928  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:54.637045  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:54.637264  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:54.637278  109339 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-993117 && echo "addons-993117" | sudo tee /etc/hostname
	I1101 08:55:54.788367  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-993117
	
	I1101 08:55:54.788450  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:54.808243  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:54.808546  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:54.808575  109339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-993117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-993117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-993117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:55:54.952950  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
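
Every SSH round trip in this phase resolves the host port mapped to the container's 22/tcp and connects as the `docker` user with the generated key. Reproducing one connection by hand (the inspect template and key path are taken from the log; MACHINE_DIR is shortened from the CI runner's MINIKUBE_HOME):

	MACHINE_DIR="$HOME/.minikube/machines/addons-993117"
	PORT=$(docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-993117)
	ssh -i "$MACHINE_DIR/id_rsa" -o StrictHostKeyChecking=no \
	    -p "$PORT" docker@127.0.0.1 hostname
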
	I1101 08:55:54.952986  109339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 08:55:54.953032  109339 ubuntu.go:190] setting up certificates
	I1101 08:55:54.953045  109339 provision.go:84] configureAuth start
	I1101 08:55:54.953104  109339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-993117
	I1101 08:55:54.970715  109339 provision.go:143] copyHostCerts
	I1101 08:55:54.970784  109339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 08:55:54.970893  109339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 08:55:54.970991  109339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 08:55:54.971052  109339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.addons-993117 san=[127.0.0.1 192.168.49.2 addons-993117 localhost minikube]
	I1101 08:55:55.676163  109339 provision.go:177] copyRemoteCerts
	I1101 08:55:55.676225  109339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:55:55.676260  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:55.694218  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:55.795276  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 08:55:55.814237  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:55:55.831285  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 08:55:55.848890  109339 provision.go:87] duration metric: took 895.830777ms to configureAuth
	I1101 08:55:55.848940  109339 ubuntu.go:206] setting minikube options for container-runtime
	I1101 08:55:55.849104  109339 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:55:55.849203  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:55.867410  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:55.867637  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:55.867656  109339 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:55:56.120652  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:55:56.120675  109339 machine.go:97] duration metric: took 1.673534029s to provisionDockerMachine
	I1101 08:55:56.120686  109339 client.go:176] duration metric: took 10.622750185s to LocalClient.Create
	I1101 08:55:56.120705  109339 start.go:167] duration metric: took 10.622822454s to libmachine.API.Create "addons-993117"
	I1101 08:55:56.120711  109339 start.go:293] postStartSetup for "addons-993117" (driver="docker")
	I1101 08:55:56.120721  109339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:55:56.120793  109339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:55:56.120842  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.138699  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.240668  109339 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:55:56.244416  109339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 08:55:56.244440  109339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 08:55:56.244456  109339 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 08:55:56.244528  109339 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 08:55:56.244556  109339 start.go:296] duration metric: took 123.838767ms for postStartSetup
	I1101 08:55:56.244853  109339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-993117
	I1101 08:55:56.261843  109339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/config.json ...
	I1101 08:55:56.262115  109339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:55:56.262159  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.279569  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.376609  109339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 08:55:56.381149  109339 start.go:128] duration metric: took 10.885168616s to createHost
	I1101 08:55:56.381173  109339 start.go:83] releasing machines lock for "addons-993117", held for 10.885336124s
	I1101 08:55:56.381250  109339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-993117
	I1101 08:55:56.398440  109339 ssh_runner.go:195] Run: cat /version.json
	I1101 08:55:56.398489  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.398510  109339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:55:56.398596  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.416778  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.416983  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.567578  109339 ssh_runner.go:195] Run: systemctl --version
	I1101 08:55:56.574026  109339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:55:56.608789  109339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:55:56.613704  109339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:55:56.613758  109339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:55:56.640216  109339 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:55:56.640237  109339 start.go:496] detecting cgroup driver to use...
	I1101 08:55:56.640268  109339 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 08:55:56.640306  109339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:55:56.656181  109339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:55:56.669399  109339 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:55:56.669457  109339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:55:56.686093  109339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:55:56.704236  109339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:55:56.788975  109339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:55:56.876182  109339 docker.go:234] disabling docker service ...
	I1101 08:55:56.876252  109339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:55:56.895382  109339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:55:56.907775  109339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:55:56.991686  109339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:55:57.072407  109339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
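
The cri-docker and docker teardown above is a fixed stop/disable/mask sequence followed by an is-active check, condensed here into bash (units taken from the log; the loop groups what the log runs as separate commands):

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit"
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
	sudo systemctl is-active --quiet docker && echo 'docker is still active' >&2
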
	I1101 08:55:57.085082  109339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:55:57.099153  109339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:55:57.099212  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.109844  109339 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 08:55:57.109942  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.118858  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.127480  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.136400  109339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:55:57.144552  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.153193  109339 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.167013  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.175736  109339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:55:57.183513  109339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:55:57.191293  109339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:55:57.268236  109339 ssh_runner.go:195] Run: sudo systemctl restart crio
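
All of the cri-o tuning above edits a single drop-in, /etc/crio/crio.conf.d/02-crio.conf, then reloads. The core edits, gathered from the logged sed commands:

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                    # drop any stale value
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
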
	I1101 08:55:57.373121  109339 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:55:57.373199  109339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:55:57.377273  109339 start.go:564] Will wait 60s for crictl version
	I1101 08:55:57.377333  109339 ssh_runner.go:195] Run: which crictl
	I1101 08:55:57.380973  109339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 08:55:57.404589  109339 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 08:55:57.404671  109339 ssh_runner.go:195] Run: crio --version
	I1101 08:55:57.434048  109339 ssh_runner.go:195] Run: crio --version
	I1101 08:55:57.463251  109339 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 08:55:57.464436  109339 cli_runner.go:164] Run: docker network inspect addons-993117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:55:57.482346  109339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 08:55:57.486490  109339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
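
The /etc/hosts updates here (and again for control-plane.minikube.internal further down) use a filter-then-append rewrite so repeated runs stay idempotent. As a generic helper (the function name is ours; the body mirrors the logged one-liner):

	update_hosts() {   # $1 = IP, $2 = hostname
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	update_hosts 192.168.49.1 host.minikube.internal
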
	I1101 08:55:57.497024  109339 kubeadm.go:884] updating cluster {Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:55:57.497139  109339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:57.497187  109339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:55:57.530509  109339 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:55:57.530530  109339 crio.go:433] Images already preloaded, skipping extraction
	I1101 08:55:57.530575  109339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:55:57.556161  109339 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:55:57.556183  109339 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:55:57.556192  109339 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 08:55:57.556279  109339 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-993117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 08:55:57.556340  109339 ssh_runner.go:195] Run: crio config
	I1101 08:55:57.600523  109339 cni.go:84] Creating CNI manager for ""
	I1101 08:55:57.600544  109339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:55:57.600558  109339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:55:57.600586  109339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-993117 NodeName:addons-993117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:55:57.600804  109339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-993117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 08:55:57.600881  109339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:55:57.609081  109339 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:55:57.609162  109339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:55:57.617309  109339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 08:55:57.630535  109339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:55:57.645812  109339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 08:55:57.658699  109339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 08:55:57.662479  109339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:55:57.672963  109339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:55:57.752686  109339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:55:57.778854  109339 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117 for IP: 192.168.49.2
	I1101 08:55:57.778875  109339 certs.go:195] generating shared ca certs ...
	I1101 08:55:57.778898  109339 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:57.779041  109339 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 08:55:57.911935  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt ...
	I1101 08:55:57.911972  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt: {Name:mk8da0f06e8b560623b0b57274ff3cad3668f0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:57.912175  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key ...
	I1101 08:55:57.912188  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key: {Name:mkbe4b7e166b5cfbcf8ea62c6168fd9056b2e3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:57.912267  109339 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 08:55:58.146928  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt ...
	I1101 08:55:58.146963  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt: {Name:mk8c15fe379a589af8cda80c274386f0bd2927a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.147147  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key ...
	I1101 08:55:58.147159  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key: {Name:mk701633aa810a9fbee56cdd65787d539763830b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.147236  109339 certs.go:257] generating profile certs ...
	I1101 08:55:58.147300  109339 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.key
	I1101 08:55:58.147313  109339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt with IP's: []
	I1101 08:55:58.418736  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt ...
	I1101 08:55:58.418772  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: {Name:mk27b8a3fc9889b3dd3cc67551cb7036fe84c509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.418974  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.key ...
	I1101 08:55:58.418988  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.key: {Name:mk613e2d496dbfd1ae4d809fe3dbd7ff2f66063c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.419082  109339 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2
	I1101 08:55:58.419106  109339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 08:55:58.576667  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2 ...
	I1101 08:55:58.576703  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2: {Name:mkd5180b2eaa5b25dc89f2ecbf3d185e57d7f5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.576882  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2 ...
	I1101 08:55:58.576896  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2: {Name:mka9620dde79f2b51f059a80ca0cc74f82891745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.576981  109339 certs.go:382] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2 -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt
	I1101 08:55:58.577063  109339 certs.go:386] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2 -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key
	I1101 08:55:58.577117  109339 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key
	I1101 08:55:58.577137  109339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt with IP's: []
	I1101 08:55:58.660197  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt ...
	I1101 08:55:58.660229  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt: {Name:mk740bdd7ee89526666c36fdfaf7b64d1105174e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.660404  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key ...
	I1101 08:55:58.660418  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key: {Name:mkb32fc47490f1ea22195e9a3e4051fda68db6f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.660593  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:55:58.660628  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 08:55:58.660649  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:55:58.660668  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
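
The certs.go lines above build a CA pair and then the profile certs; note the IP SANs on the apiserver cert (10.96.0.1 is the in-cluster Service VIP for kubernetes.default, 192.168.49.2 the node IP). A compact crypto/x509 sketch of that shape, standard library only and illustrative rather than minikube's actual crypto.go:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Self-signed CA, standing in for the "minikubeCA" the log generates.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Apiserver serving cert with the same IP SANs the log shows.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	fmt.Printf("issued %d-byte apiserver cert signed by %q\n", len(srvDER), caCert.Subject.CommonName)
}
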
	I1101 08:55:58.661292  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:55:58.680929  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:55:58.699726  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:55:58.718882  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 08:55:58.737421  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:55:58.755343  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 08:55:58.773698  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:55:58.791195  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 08:55:58.808968  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:55:58.828998  109339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:55:58.842236  109339 ssh_runner.go:195] Run: openssl version
	I1101 08:55:58.848317  109339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:55:58.859505  109339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:55:58.863453  109339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:55:58.863513  109339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:55:58.898087  109339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
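
The b5213941.0 link name above is OpenSSL's subject-hash convention for the system trust directory: `openssl x509 -hash -noout` prints a hash of the certificate's subject, and /etc/ssl/certs/<hash>.0 is where TLS libraries look the CA up (the .0 suffix disambiguates hash collisions). A small sketch deriving the link name the same way (assumes openssl is on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash runs the same openssl invocation the log shows to compute
// the /etc/ssl/certs/<hash>.0 symlink name for a CA certificate.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("trust symlink would be /etc/ssl/certs/%s.0\n", h)
}
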
	I1101 08:55:58.906971  109339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:55:58.910997  109339 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:55:58.911051  109339 kubeadm.go:401] StartCluster: {Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:55:58.911122  109339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:55:58.911166  109339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:55:58.938532  109339 cri.go:89] found id: ""
	I1101 08:55:58.938605  109339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:55:58.946744  109339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:55:58.954806  109339 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 08:55:58.954864  109339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:55:58.962484  109339 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:55:58.962506  109339 kubeadm.go:158] found existing configuration files:
	
	I1101 08:55:58.962545  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:55:58.970036  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:55:58.970092  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:55:58.978180  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:55:58.985626  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:55:58.985788  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:55:58.993560  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:55:59.001050  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:55:59.001106  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:55:59.008439  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:55:59.016068  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:55:59.016135  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
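
The grep/rm pairs above are a stale-kubeconfig sweep: each of the four kubeconfigs is kept only if it already points at the expected control-plane endpoint; on a first start none exist, so every grep exits with status 2 and the rm -f is a no-op. A minimal sketch of the same sweep (illustrative only, not minikube's kubeadm.go):

package main

import (
	"fmt"
	"os/exec"
)

// cleanStaleConfs mirrors the log's sequence: any kubeconfig that does not
// already reference the expected endpoint is removed so kubeadm regenerates it.
func cleanStaleConfs(endpoint string) {
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := "/etc/kubernetes/" + f
		if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			_ = exec.Command("sudo", "rm", "-f", path).Run()
		}
	}
}

func main() {
	cleanStaleConfs("https://control-plane.minikube.internal:8443")
}
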
	I1101 08:55:59.023594  109339 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 08:55:59.060542  109339 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:55:59.060591  109339 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:55:59.083090  109339 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 08:55:59.083204  109339 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 08:55:59.083268  109339 kubeadm.go:319] OS: Linux
	I1101 08:55:59.083327  109339 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 08:55:59.083395  109339 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 08:55:59.083475  109339 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 08:55:59.083552  109339 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 08:55:59.083633  109339 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 08:55:59.083744  109339 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 08:55:59.083826  109339 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 08:55:59.083891  109339 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 08:55:59.142301  109339 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:55:59.142462  109339 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:55:59.142607  109339 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:55:59.151028  109339 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:55:59.153338  109339 out.go:252]   - Generating certificates and keys ...
	I1101 08:55:59.153443  109339 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:55:59.153569  109339 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:55:59.330456  109339 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:55:59.455187  109339 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:55:59.553416  109339 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:55:59.886967  109339 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:56:00.075643  109339 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:56:00.075804  109339 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-993117 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:56:00.606652  109339 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:56:00.606822  109339 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-993117 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:56:00.743553  109339 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:56:00.879200  109339 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:56:01.038748  109339 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:56:01.038824  109339 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:56:01.170839  109339 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:56:01.309080  109339 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:56:01.430203  109339 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:56:01.534382  109339 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:56:01.660318  109339 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:56:01.660764  109339 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:56:01.664665  109339 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:56:01.666168  109339 out.go:252]   - Booting up control plane ...
	I1101 08:56:01.666355  109339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:56:01.666480  109339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:56:01.667123  109339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:56:01.680973  109339 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:56:01.681114  109339 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:56:01.688415  109339 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:56:01.688699  109339 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:56:01.688755  109339 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:56:01.786168  109339 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:56:01.786318  109339 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:56:03.287208  109339 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501173534s
	I1101 08:56:03.291134  109339 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:56:03.291227  109339 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 08:56:03.291311  109339 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:56:03.291399  109339 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:56:04.314456  109339 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.023227837s
	I1101 08:56:05.377779  109339 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.086605121s
	I1101 08:56:07.293368  109339 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002135105s
	I1101 08:56:07.305463  109339 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:56:07.317546  109339 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:56:07.327364  109339 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:56:07.327692  109339 kubeadm.go:319] [mark-control-plane] Marking the node addons-993117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:56:07.335856  109339 kubeadm.go:319] [bootstrap-token] Using token: xs4pqr.4gc1opr1rfh0byc9
	I1101 08:56:07.338572  109339 out.go:252]   - Configuring RBAC rules ...
	I1101 08:56:07.338710  109339 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:56:07.341965  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:56:07.348055  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:56:07.350879  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:56:07.353563  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:56:07.357855  109339 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:56:07.700122  109339 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:56:08.119213  109339 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:56:08.699456  109339 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:56:08.700349  109339 kubeadm.go:319] 
	I1101 08:56:08.700433  109339 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:56:08.700444  109339 kubeadm.go:319] 
	I1101 08:56:08.700545  109339 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:56:08.700570  109339 kubeadm.go:319] 
	I1101 08:56:08.700619  109339 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:56:08.700691  109339 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:56:08.700748  109339 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:56:08.700758  109339 kubeadm.go:319] 
	I1101 08:56:08.700843  109339 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:56:08.700852  109339 kubeadm.go:319] 
	I1101 08:56:08.700945  109339 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:56:08.700954  109339 kubeadm.go:319] 
	I1101 08:56:08.701031  109339 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:56:08.701139  109339 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:56:08.701203  109339 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:56:08.701209  109339 kubeadm.go:319] 
	I1101 08:56:08.701278  109339 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:56:08.701354  109339 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:56:08.701360  109339 kubeadm.go:319] 
	I1101 08:56:08.701433  109339 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xs4pqr.4gc1opr1rfh0byc9 \
	I1101 08:56:08.701522  109339 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 \
	I1101 08:56:08.701542  109339 kubeadm.go:319] 	--control-plane 
	I1101 08:56:08.701548  109339 kubeadm.go:319] 
	I1101 08:56:08.701665  109339 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:56:08.701680  109339 kubeadm.go:319] 
	I1101 08:56:08.701759  109339 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xs4pqr.4gc1opr1rfh0byc9 \
	I1101 08:56:08.701870  109339 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 
	I1101 08:56:08.704271  109339 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 08:56:08.704441  109339 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
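
The join commands printed above carry a --discovery-token-ca-cert-hash; that value is the SHA-256 of the cluster CA certificate's Subject Public Key Info, which joining nodes use to pin the CA before trusting anything the API server returns. A standard-library sketch that recomputes it from ca.crt (the path is the one the log scp'd the CA to earlier):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash recomputes kubeadm's discovery hash: sha256 over the
// DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	fmt.Println(h) // should match the hash in the join command above
}
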
	I1101 08:56:08.704468  109339 cni.go:84] Creating CNI manager for ""
	I1101 08:56:08.704478  109339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:56:08.706289  109339 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 08:56:08.707555  109339 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 08:56:08.711705  109339 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 08:56:08.711728  109339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 08:56:08.725308  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 08:56:08.933409  109339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:56:08.933497  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:08.933562  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-993117 minikube.k8s.io/updated_at=2025_11_01T08_56_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-993117 minikube.k8s.io/primary=true
	I1101 08:56:09.027961  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:09.027960  109339 ops.go:34] apiserver oom_adj: -16
	I1101 08:56:09.528080  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:10.028465  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:10.528117  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:11.028718  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:11.528995  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:12.028884  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:12.528772  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:13.028882  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:13.529047  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:13.598421  109339 kubeadm.go:1114] duration metric: took 4.664986368s to wait for elevateKubeSystemPrivileges
	I1101 08:56:13.598455  109339 kubeadm.go:403] duration metric: took 14.687408886s to StartCluster
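
The burst of `kubectl get sa default` runs above, spaced roughly 500ms apart from 08:56:09 to 08:56:13, is a poll loop: minikube waits for the default service account to appear, which signals that the RBAC bootstrap (elevateKubeSystemPrivileges) finished; 4.66s here. A generic sketch of that poll-with-deadline pattern (hypothetical helper, not minikube's code):

package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs the same probe the log repeats every ~500ms
// until it succeeds or the context deadline expires.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; RBAC bootstrap is done
		}
		select {
		case <-ctx.Done():
			return errors.New("timed out waiting for default service account")
		case <-tick.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	fmt.Println(waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"))
}
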
	I1101 08:56:13.598475  109339 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:56:13.598579  109339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 08:56:13.599018  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:56:13.599210  109339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:56:13.599222  109339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:56:13.599300  109339 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:56:13.599431  109339 addons.go:70] Setting yakd=true in profile "addons-993117"
	I1101 08:56:13.599444  109339 addons.go:70] Setting inspektor-gadget=true in profile "addons-993117"
	I1101 08:56:13.599467  109339 addons.go:70] Setting metrics-server=true in profile "addons-993117"
	I1101 08:56:13.599477  109339 addons.go:239] Setting addon inspektor-gadget=true in "addons-993117"
	I1101 08:56:13.599482  109339 addons.go:239] Setting addon metrics-server=true in "addons-993117"
	I1101 08:56:13.599475  109339 addons.go:70] Setting default-storageclass=true in profile "addons-993117"
	I1101 08:56:13.599504  109339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-993117"
	I1101 08:56:13.599513  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599459  109339 addons.go:239] Setting addon yakd=true in "addons-993117"
	I1101 08:56:13.599519  109339 addons.go:70] Setting gcp-auth=true in profile "addons-993117"
	I1101 08:56:13.599535  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599536  109339 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-993117"
	I1101 08:56:13.599543  109339 addons.go:70] Setting ingress-dns=true in profile "addons-993117"
	I1101 08:56:13.599551  109339 mustload.go:66] Loading cluster: addons-993117
	I1101 08:56:13.599562  109339 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-993117"
	I1101 08:56:13.599562  109339 addons.go:239] Setting addon ingress-dns=true in "addons-993117"
	I1101 08:56:13.599596  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599524  109339 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:56:13.599515  109339 addons.go:70] Setting cloud-spanner=true in profile "addons-993117"
	I1101 08:56:13.600140  109339 addons.go:239] Setting addon cloud-spanner=true in "addons-993117"
	I1101 08:56:13.600173  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.600357  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.600435  109339 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-993117"
	I1101 08:56:13.600453  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.600477  109339 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-993117"
	I1101 08:56:13.600486  109339 addons.go:70] Setting registry=true in profile "addons-993117"
	I1101 08:56:13.600493  109339 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-993117"
	I1101 08:56:13.600498  109339 addons.go:239] Setting addon registry=true in "addons-993117"
	I1101 08:56:13.599498  109339 addons.go:70] Setting ingress=true in profile "addons-993117"
	I1101 08:56:13.600512  109339 addons.go:239] Setting addon ingress=true in "addons-993117"
	I1101 08:56:13.600520  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.600498  109339 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-993117"
	I1101 08:56:13.600536  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.600549  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.601099  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.601102  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.601166  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.601991  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.602551  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.603137  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.599515  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599633  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.602091  109339 addons.go:70] Setting registry-creds=true in profile "addons-993117"
	I1101 08:56:13.603434  109339 addons.go:239] Setting addon registry-creds=true in "addons-993117"
	I1101 08:56:13.603477  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.602120  109339 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:56:13.602338  109339 addons.go:70] Setting volumesnapshots=true in profile "addons-993117"
	I1101 08:56:13.603757  109339 addons.go:239] Setting addon volumesnapshots=true in "addons-993117"
	I1101 08:56:13.603788  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.602352  109339 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-993117"
	I1101 08:56:13.603965  109339 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-993117"
	I1101 08:56:13.602362  109339 addons.go:70] Setting volcano=true in profile "addons-993117"
	I1101 08:56:13.604093  109339 addons.go:239] Setting addon volcano=true in "addons-993117"
	I1101 08:56:13.604119  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.604592  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.604654  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.600522  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.605727  109339 addons.go:70] Setting storage-provisioner=true in profile "addons-993117"
	I1101 08:56:13.605787  109339 addons.go:239] Setting addon storage-provisioner=true in "addons-993117"
	I1101 08:56:13.605838  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.606079  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.606787  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.607159  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.607297  109339 out.go:179] * Verifying Kubernetes components...
	I1101 08:56:13.609067  109339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:56:13.614389  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.614417  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.615008  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.630184  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.661338  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:56:13.663197  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:56:13.663288  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:56:13.668491  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:56:13.668583  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:56:13.670085  109339 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:56:13.671151  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:56:13.671315  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:56:13.671331  109339 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:56:13.671407  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.671677  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:56:13.672569  109339 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:56:13.672588  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:56:13.672642  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.673509  109339 addons.go:239] Setting addon default-storageclass=true in "addons-993117"
	I1101 08:56:13.673602  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.674291  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.674387  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:56:13.675431  109339 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:56:13.676388  109339 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:56:13.676407  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:56:13.676480  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.678032  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:56:13.679325  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:56:13.680928  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:56:13.682177  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:56:13.682199  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:56:13.682344  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.682568  109339 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:56:13.683369  109339 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:56:13.684777  109339 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:56:13.684797  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:56:13.684861  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.687355  109339 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:56:13.687791  109339 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:56:13.688518  109339 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:56:13.688539  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:56:13.688622  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.689185  109339 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:56:13.689203  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:56:13.689271  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.698012  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:56:13.701538  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:56:13.701568  109339 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:56:13.701647  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	W1101 08:56:13.703899  109339 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:56:13.707178  109339 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:56:13.707185  109339 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:56:13.708960  109339 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:56:13.708985  109339 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:56:13.709079  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.709900  109339 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:56:13.710005  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:56:13.710082  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.715957  109339 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:56:13.717373  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:56:13.717414  109339 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:56:13.717484  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.729311  109339 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:56:13.731532  109339 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-993117"
	I1101 08:56:13.731590  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.731758  109339 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:56:13.731775  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:56:13.731839  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.732123  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.736073  109339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:56:13.738376  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.738672  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.739798  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.743495  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.743557  109339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:56:13.743573  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:56:13.743635  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.744938  109339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
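
The sed pipeline above is how the host.minikube.internal record gets into CoreDNS: it rewrites the coredns ConfigMap in place, adding a log directive and, unescaped, this hosts stanza ahead of the forward plugin so the Docker network gateway resolves inside pods:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

The confirmation appears below at 08:56:14.111 ("host record injected into CoreDNS's ConfigMap").
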
	I1101 08:56:13.754783  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.770265  109339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:56:13.770293  109339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:56:13.770350  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.773371  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.774071  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.775096  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.784370  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.792429  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.796002  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.805959  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.809493  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	W1101 08:56:13.809888  109339 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:56:13.810342  109339 retry.go:31] will retry after 234.782741ms: ssh: handshake failed: EOF
	I1101 08:56:13.810887  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.811558  109339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 08:56:13.812069  109339 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:56:13.812128  109339 retry.go:31] will retry after 225.126126ms: ssh: handshake failed: EOF
	I1101 08:56:13.812702  109339 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:56:13.814017  109339 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:56:13.815290  109339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:56:13.815313  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:56:13.815373  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.820748  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	W1101 08:56:13.821758  109339 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:56:13.821784  109339 retry.go:31] will retry after 178.188905ms: ssh: handshake failed: EOF
	I1101 08:56:13.849547  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
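
The `ssh: handshake failed: EOF` warnings above are benign: a dozen addon installers dial the same docker-forwarded SSH port (32768 here, resolved via the NetworkSettings.Ports template in the docker inspect calls) concurrently, and the dials that lose the race simply retry after a short randomized delay. A sketch of that dial-with-jittered-retry pattern (just the shape, not minikube's sshutil):

package main

import (
	"fmt"
	"math/rand"
	"net"
	"time"
)

// dialWithRetry mimics the log's behavior: on a failed dial, wait a short
// randomized interval and try again, up to maxAttempts.
func dialWithRetry(addr string, maxAttempts int) (net.Conn, error) {
	var lastErr error
	for i := 0; i < maxAttempts; i++ {
		c, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return c, nil
		}
		lastErr = err
		d := time.Duration(100+rand.Intn(200)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return nil, lastErr
}

func main() {
	if c, err := dialWithRetry("127.0.0.1:32768", 5); err == nil {
		c.Close()
	}
}
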
	I1101 08:56:13.909435  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:56:13.909464  109339 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:56:13.919257  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:56:13.922656  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:56:13.926192  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:56:13.928818  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:56:13.928836  109339 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:56:13.930297  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:56:13.930316  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:56:13.941188  109339 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:56:13.941225  109339 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:56:13.952645  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:56:13.952673  109339 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:56:13.962322  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:56:13.962346  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:56:13.962538  109339 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:13.962551  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:56:13.967825  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:56:13.968968  109339 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:56:13.969033  109339 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:56:13.973025  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:56:13.973813  109339 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:56:13.973862  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:56:13.984654  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:56:13.984681  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:56:13.991422  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:56:13.991513  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:56:14.002557  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:14.003169  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:56:14.007789  109339 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:56:14.007821  109339 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:56:14.012318  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:56:14.015942  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:56:14.015969  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:56:14.021516  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:56:14.034314  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:56:14.034351  109339 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:56:14.052112  109339 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:56:14.052144  109339 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:56:14.076115  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:56:14.076158  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:56:14.111123  109339 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 08:56:14.113069  109339 node_ready.go:35] waiting up to 6m0s for node "addons-993117" to be "Ready" ...
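
The line above starts the node readiness wait: minikube polls the node object until its Ready condition reports True, within a 6-minute budget. A minimal client-go sketch of that loop, assuming a *kubernetes.Clientset is in scope (illustrative only, not minikube's actual node_ready.go):

	// A minimal sketch of the node Ready wait, assuming client-go;
	// illustrative only, not minikube's actual node_ready.go.
	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the named node until its Ready condition is True,
	// within the same 6-minute budget the log shows.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}
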
	I1101 08:56:14.115133  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:56:14.115156  109339 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:56:14.120067  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:56:14.120094  109339 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:56:14.159307  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:56:14.159353  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:56:14.176642  109339 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:56:14.176668  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:56:14.195853  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:56:14.234076  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:56:14.234125  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:56:14.247219  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:56:14.252931  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:56:14.262765  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:56:14.281960  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:56:14.282007  109339 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:56:14.336374  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:56:14.383112  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:56:14.383145  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:56:14.430576  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:56:14.430606  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:56:14.489804  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:56:14.489856  109339 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:56:14.544887  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:56:14.652125  109339 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-993117" context rescaled to 1 replicas
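
The "rescaled to 1 replicas" line reflects shrinking the coredns deployment (created with two replicas) down to one. A minimal sketch of the same operation via client-go's scale subresource (assumed approach; minikube's kapi.go may patch the deployment differently):

	// A minimal sketch of rescaling coredns to one replica via the
	// scale subresource; assumed approach, not necessarily kapi.go's.
	package sketch

	import (
		"context"

		autoscalingv1 "k8s.io/api/autoscaling/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		scale := &autoscalingv1.Scale{
			ObjectMeta: metav1.ObjectMeta{Name: "coredns", Namespace: "kube-system"},
			Spec:       autoscalingv1.ScaleSpec{Replicas: 1},
		}
		_, err := cs.AppsV1().Deployments("kube-system").
			UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
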
	I1101 08:56:15.235797  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.262732661s)
	I1101 08:56:15.235856  109339 addons.go:480] Verifying addon ingress=true in "addons-993117"
	I1101 08:56:15.236030  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.232775822s)
	I1101 08:56:15.235969  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.233311002s)
	W1101 08:56:15.236085  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:15.236093  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.22373794s)
	I1101 08:56:15.236110  109339 addons.go:480] Verifying addon registry=true in "addons-993117"
	I1101 08:56:15.236109  109339 retry.go:31] will retry after 143.298607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
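
The failure above is deterministic once decoded: every object in ig-deployment.yaml applies cleanly, but kubectl's client-side validation rejects ig-crd.yaml because at least one YAML document in it carries no apiVersion/kind (typically an empty or truncated manifest), so each retry fails identically until the file content changes. A minimal pre-flight check in Go, assuming sigs.k8s.io/yaml; the helper name and its naive splitting on "---" are simplifications:

	// A pre-flight check for the failure mode in this log: kubectl
	// rejects any manifest document lacking apiVersion or kind.
	// Hypothetical helper, not part of minikube.
	package sketch

	import (
		"fmt"
		"strings"

		"sigs.k8s.io/yaml"
	)

	type typeMeta struct {
		APIVersion string `json:"apiVersion"`
		Kind       string `json:"kind"`
	}

	// checkManifests flags the first non-empty YAML document that is
	// missing apiVersion or kind — the exact complaint kubectl raises
	// against ig-crd.yaml above.
	func checkManifests(manifest string) error {
		for i, doc := range strings.Split(manifest, "\n---\n") {
			if strings.TrimSpace(doc) == "" {
				continue
			}
			var tm typeMeta
			if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
				return fmt.Errorf("doc %d: %v", i, err)
			}
			if tm.APIVersion == "" || tm.Kind == "" {
				return fmt.Errorf("doc %d: apiVersion/kind not set", i)
			}
		}
		return nil
	}
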
	I1101 08:56:15.236205  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.214647969s)
	I1101 08:56:15.236337  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040437417s)
	I1101 08:56:15.236357  109339 addons.go:480] Verifying addon metrics-server=true in "addons-993117"
	I1101 08:56:15.237852  109339 out.go:179] * Verifying ingress addon...
	I1101 08:56:15.238871  109339 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-993117 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:56:15.238880  109339 out.go:179] * Verifying registry addon...
	I1101 08:56:15.240868  109339 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:56:15.242165  109339 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:56:15.244605  109339 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:56:15.244624  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:15.245001  109339 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:56:15.245017  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
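
The kapi.go lines above implement a label-selector wait: list pods matching the selector in a namespace, then poll until one leaves Pending. A minimal sketch with client-go (behavior inferred from the log; not minikube's exact kapi.go):

	// A minimal sketch of the kapi.go-style wait: poll pods by label
	// selector until one reports phase Running. Inferred from the log.
	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil // still Pending, as in the lines above
			})
	}
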
	I1101 08:56:15.380298  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:15.651574  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.404299736s)
	W1101 08:56:15.651633  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:56:15.651661  109339 retry.go:31] will retry after 240.796509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
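
This retry loop has a different, benign cause: the snapshot CRDs and the VolumeSnapshotClass custom resource are applied in one kubectl invocation, and the API server has not yet established the new CRD when kubectl resolves the CR's kind, hence "no matches for kind ... ensure CRDs are installed first". Retrying, as minikube does here, is one fix; another is to wait for the CRD's Established condition before applying CRs, sketched below with the apiextensions client (an assumed alternative, not what minikube does):

	// A minimal sketch of avoiding the race above: after creating a CRD,
	// wait for its Established condition before applying custom resources.
	package sketch

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	func waitCRDEstablished(ctx context.Context, c apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil
				}
				for _, cond := range crd.Status.Conditions {
					if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

For the case above the call would be waitCRDEstablished(ctx, c, "volumesnapshotclasses.snapshot.storage.k8s.io") before applying csi-hostpath-snapshotclass.yaml.
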
	I1101 08:56:15.651759  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.398804313s)
	I1101 08:56:15.651808  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.389018672s)
	I1101 08:56:15.652133  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.315715303s)
	I1101 08:56:15.652435  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.107500219s)
	I1101 08:56:15.652497  109339 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-993117"
	I1101 08:56:15.655158  109339 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:56:15.657731  109339 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:56:15.660841  109339 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:56:15.660864  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:15.769537  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:15.769754  109339 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:56:15.769771  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:15.893272  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1101 08:56:16.036218  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:16.036262  109339 retry.go:31] will retry after 392.94291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:56:16.116423  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:16.161067  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:16.244072  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:16.245598  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:16.430047  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:16.661436  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:16.761860  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:16.762105  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:17.161873  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:17.244472  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:17.244590  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:17.661899  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:17.744611  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:17.744859  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:56:18.116614  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:18.160407  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:18.244107  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:18.244752  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:18.379531  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.486200378s)
	I1101 08:56:18.379592  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.949517274s)
	W1101 08:56:18.379616  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:18.379633  109339 retry.go:31] will retry after 794.996455ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:18.661087  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:18.762458  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:18.762532  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:19.161797  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:19.174798  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:19.244599  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:19.245408  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:19.661474  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:56:19.744035  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:19.744074  109339 retry.go:31] will retry after 1.121376386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:19.744955  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:19.745331  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:20.161142  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:20.244123  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:20.244717  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:56:20.616480  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:20.662385  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:20.744266  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:20.744711  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:20.865754  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:21.161147  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:21.244730  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:21.245054  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:21.355138  109339 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:56:21.355205  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:21.376790  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
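
The sshutil.go line shows the client minikube builds for the node: key-based auth as user docker against the container's forwarded SSH port 32768 on 127.0.0.1. A hypothetical equivalent with golang.org/x/crypto/ssh (host-key checking is skipped here for brevity; the real code may verify it):

	// A hypothetical sketch of the SSH client described by the
	// sshutil.go line above; not minikube's implementation.
	package sketch

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func newSSHClient(ip string, port int, keyPath, user string) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM; real code may pin the key
		}
		return ssh.Dial("tcp", fmt.Sprintf("%s:%d", ip, port), cfg)
	}
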
	W1101 08:56:21.424718  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:21.424756  109339 retry.go:31] will retry after 1.247725993s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:21.485051  109339 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:56:21.497581  109339 addons.go:239] Setting addon gcp-auth=true in "addons-993117"
	I1101 08:56:21.497664  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:21.498087  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:21.516567  109339 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:56:21.516613  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:21.534285  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:21.632486  109339 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:56:21.634119  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:56:21.635395  109339 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:56:21.635416  109339 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:56:21.649489  109339 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:56:21.649514  109339 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:56:21.661828  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:21.663054  109339 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:56:21.663073  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:56:21.676527  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:56:21.744054  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:21.745588  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:21.987363  109339 addons.go:480] Verifying addon gcp-auth=true in "addons-993117"
	I1101 08:56:21.988833  109339 out.go:179] * Verifying gcp-auth addon...
	I1101 08:56:21.990814  109339 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:56:21.993110  109339 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:56:21.993125  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:22.160971  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:22.244078  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:22.245555  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:22.494484  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:22.661310  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:22.673391  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:22.744331  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:22.745844  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:22.994423  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:23.116269  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:23.161260  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:56:23.216011  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:23.216047  109339 retry.go:31] will retry after 1.037960417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:23.244060  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:23.244383  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:23.494202  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:23.661574  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:23.744664  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:23.744660  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:23.994635  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:24.161126  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:24.243894  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:24.245439  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:24.254628  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:24.494292  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:24.661495  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:24.744001  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:24.744990  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:56:24.797863  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:24.797896  109339 retry.go:31] will retry after 3.053906263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
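
The retry.go delays visible in this log (143ms, 240ms, 392ms, 794ms, 1.12s, ... 3.05s) grow roughly exponentially with jitter. A minimal sketch of that backoff pattern (the shape is inferred from the intervals; the exact policy is minikube's own):

	// A minimal sketch of jittered exponential backoff, matching the
	// 143ms -> 240ms -> 392ms -> ... progression logged above.
	package sketch

	import (
		"math/rand"
		"time"
	)

	// retryWithBackoff retries f with a jittered, doubling delay.
	// base must be at least a few milliseconds or the jitter math panics.
	func retryWithBackoff(f func() error, attempts int, base time.Duration) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = f(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay / 2)))
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}
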
	I1101 08:56:24.995401  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:25.116797  109339 node_ready.go:49] node "addons-993117" is "Ready"
	I1101 08:56:25.116865  109339 node_ready.go:38] duration metric: took 11.003760435s for node "addons-993117" to be "Ready" ...
	I1101 08:56:25.116887  109339 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:56:25.116977  109339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:56:25.134829  109339 api_server.go:72] duration metric: took 11.535579277s to wait for apiserver process to appear ...
	I1101 08:56:25.134860  109339 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:56:25.134885  109339 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 08:56:25.139615  109339 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 08:56:25.140670  109339 api_server.go:141] control plane version: v1.34.1
	I1101 08:56:25.140715  109339 api_server.go:131] duration metric: took 5.847732ms to wait for apiserver health ...
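
The healthz probe above is a plain HTTPS GET against the apiserver, treating status 200 with body "ok" as healthy. A minimal sketch (TLS verification is skipped for brevity; minikube authenticates with the cluster's real credentials):

	// A minimal sketch of the healthz probe logged above: GET
	// https://<apiserver>/healthz and treat HTTP 200 as healthy.
	package sketch

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil // the log above shows "returned 200: ok"
	}
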
	I1101 08:56:25.140724  109339 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:56:25.146244  109339 system_pods.go:59] 20 kube-system pods found
	I1101 08:56:25.146284  109339 system_pods.go:61] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending
	I1101 08:56:25.146298  109339 system_pods.go:61] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.146305  109339 system_pods.go:61] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending
	I1101 08:56:25.146313  109339 system_pods.go:61] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending
	I1101 08:56:25.146318  109339 system_pods.go:61] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending
	I1101 08:56:25.146323  109339 system_pods.go:61] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.146329  109339 system_pods.go:61] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.146335  109339 system_pods.go:61] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.146345  109339 system_pods.go:61] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.146353  109339 system_pods.go:61] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.146363  109339 system_pods.go:61] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.146368  109339 system_pods.go:61] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.146376  109339 system_pods.go:61] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.146382  109339 system_pods.go:61] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending
	I1101 08:56:25.146390  109339 system_pods.go:61] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.146398  109339 system_pods.go:61] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.146406  109339 system_pods.go:61] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.146415  109339 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.146421  109339 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending
	I1101 08:56:25.146427  109339 system_pods.go:61] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending
	I1101 08:56:25.146435  109339 system_pods.go:74] duration metric: took 5.703263ms to wait for pod list to return data ...
	I1101 08:56:25.146451  109339 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:56:25.151348  109339 default_sa.go:45] found service account: "default"
	I1101 08:56:25.151379  109339 default_sa.go:55] duration metric: took 4.921573ms for default service account to be created ...
	I1101 08:56:25.151392  109339 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:56:25.166874  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:25.166940  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending
	I1101 08:56:25.166955  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.166963  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending
	I1101 08:56:25.166974  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:25.166980  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending
	I1101 08:56:25.166986  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.166994  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.167001  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.167007  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.167018  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.167024  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.167031  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.167039  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.167044  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending
	I1101 08:56:25.167053  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.167060  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.167068  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.167077  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.167084  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending
	I1101 08:56:25.167090  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending
	I1101 08:56:25.167111  109339 retry.go:31] will retry after 311.66806ms: missing components: kube-dns
	I1101 08:56:25.173869  109339 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:56:25.173898  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:25.244720  109339 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:56:25.244744  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:25.245000  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:25.483437  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:25.483477  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:56:25.483515  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.483525  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:56:25.483533  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:25.483546  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:56:25.483553  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.483559  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.483564  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.483620  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.483629  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.483634  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.483640  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.483647  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.483660  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:56:25.483669  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.483682  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.483694  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.483702  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.483710  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.483717  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:56:25.483737  109339 retry.go:31] will retry after 244.489184ms: missing components: kube-dns
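The repeated listings above are minikube's readiness poll: system_pods.go enumerates every kube-system pod and retries on a short, jittered delay until the required components (here kube-dns, served by CoreDNS) report Running. A minimal sketch of the same wait, assuming the standard k8s-app=kube-dns label and a kubeconfig pointed at this cluster:

    # poll until at least one CoreDNS pod reports phase Running
    until kubectl -n kube-system get pods -l k8s-app=kube-dns \
        -o jsonpath='{.items[*].status.phase}' | grep -q Running; do
      sleep 1
    done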
	I1101 08:56:25.582581  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:25.683882  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:25.734094  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:25.734138  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:56:25.734151  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.734161  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:56:25.734169  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:25.734179  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:56:25.734190  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.734198  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.734204  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.734209  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.734217  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.734227  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.734233  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.734241  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.734249  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:56:25.734260  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.734268  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.734280  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.734288  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.734298  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.734306  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:56:25.734329  109339 retry.go:31] will retry after 378.600191ms: missing components: kube-dns
	I1101 08:56:25.744493  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:25.744714  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:25.996391  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:26.119109  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:26.119162  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:56:26.119178  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Running
	I1101 08:56:26.119195  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:56:26.119210  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:26.119221  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:56:26.119229  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:26.119236  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:26.119247  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:26.119259  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:26.119274  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:26.119284  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:26.119291  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:26.119305  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:26.119318  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:56:26.119330  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:26.119338  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:26.119349  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:26.119362  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:26.119372  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:26.119380  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Running
	I1101 08:56:26.119392  109339 system_pods.go:126] duration metric: took 967.992599ms to wait for k8s-apps to be running ...
	I1101 08:56:26.119406  109339 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:56:26.119468  109339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:56:26.137503  109339 system_svc.go:56] duration metric: took 18.086345ms WaitForService to wait for kubelet
	I1101 08:56:26.137534  109339 kubeadm.go:587] duration metric: took 12.538288508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:56:26.137558  109339 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:56:26.141070  109339 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 08:56:26.141104  109339 node_conditions.go:123] node cpu capacity is 8
	I1101 08:56:26.141119  109339 node_conditions.go:105] duration metric: took 3.554596ms to run NodePressure ...
	I1101 08:56:26.141136  109339 start.go:242] waiting for startup goroutines ...
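Once the app wait completes, minikube verifies node conditions (the NodePressure check) and records the node's ephemeral-storage and CPU capacity, as seen in the node_conditions.go lines above. The same fields can be read straight off the node object; a sketch, assuming kubectl targets this cluster and the node name from the log:

    # print each node condition as type=status, then the raw capacity map
    kubectl get node addons-993117 \
      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
    kubectl get node addons-993117 -o jsonpath='{.status.capacity}'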
	I1101 08:56:26.161136  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:26.244109  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:26.245573  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:26.495240  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:26.661934  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:26.762615  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:26.762582  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:26.995236  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:27.161803  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:27.244862  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:27.245280  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:27.493837  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:27.661942  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:27.745119  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:27.745250  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:27.852520  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:27.993962  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:28.161744  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:28.246717  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:28.246762  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:28.493816  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:28.529383  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:28.529423  109339 retry.go:31] will retry after 4.93840652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
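This apply fails client-side validation, not at the API server: kubectl requires every YAML document in the file to set both apiVersion and kind, and ig-crd.yaml evidently contains a document with content that sets neither. The failure can be reproduced without mutating the cluster, and the offending document located; a sketch, assuming the paths from the log and GNU awk (multi-character RS):

    # reproduce the validation error locally, applying nothing
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --dry-run=client \
      -f /etc/kubernetes/addons/ig-crd.yaml

    # print any non-empty YAML document lacking apiVersion or kind
    awk 'BEGIN{RS="---"} NF && ($0 !~ /apiVersion:/ || $0 !~ /kind:/) {print "suspect:\n" $0}' \
      /etc/kubernetes/addons/ig-crd.yaml

The error text itself offers --validate=false as an escape hatch, but that would only mask the malformed document rather than install a usable CRD.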
	I1101 08:56:28.662588  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:28.763557  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:28.763554  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:28.995438  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:29.162118  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:29.247384  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:29.247450  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:29.494932  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:29.661681  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:29.745145  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:29.745345  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:29.994268  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:30.161943  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:30.244940  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:30.245373  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:30.494515  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:30.662824  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:30.764453  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:30.764570  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:30.994286  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:31.161709  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:31.244317  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:31.244948  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:31.493816  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:31.661481  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:31.744330  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:31.744719  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:31.993895  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:32.161657  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:32.244932  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:32.245032  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:32.494412  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:32.662213  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:32.745489  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:32.745540  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:32.994812  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:33.161355  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:33.244044  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:33.244661  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:33.468028  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:33.494787  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:33.662566  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:33.744663  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:33.745032  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:33.994530  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:34.008282  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:34.008314  109339 retry.go:31] will retry after 7.842026789s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:34.161976  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:34.245652  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:34.246039  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:34.495512  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:34.662500  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:34.745375  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:34.746283  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:34.994220  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:35.162056  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:35.244890  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:35.246782  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:35.496528  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:35.786070  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:35.786091  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:35.786383  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:35.994248  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:36.162016  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:36.245229  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:36.245384  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:36.494536  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:36.661717  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:36.744767  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:36.744982  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:36.994601  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:37.161960  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:37.244814  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:37.245177  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:37.494406  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:37.678943  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:37.745852  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:37.746083  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:37.994335  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:38.161472  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:38.244423  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:38.244978  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:38.494077  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:38.661328  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:38.743940  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:38.745884  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:38.994306  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:39.161603  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:39.244321  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:39.245009  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:39.494302  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:39.662045  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:39.745364  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:39.745366  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:39.994945  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:40.161533  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:40.245212  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:40.245447  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:40.494389  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:40.661523  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:40.744478  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:40.745007  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:40.994291  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:41.161314  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:41.243997  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:41.244524  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:41.494491  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:41.662207  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:41.744504  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:41.745553  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:41.850904  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:41.995662  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:42.162732  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:42.244350  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:42.245247  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:42.493752  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:42.661743  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:56:42.699170  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:42.699211  109339 retry.go:31] will retry after 11.303479007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:42.744298  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:42.744628  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:42.994705  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:43.162046  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:43.244609  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:43.245168  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:43.576328  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:43.737011  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:43.778756  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:43.779028  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:43.993656  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:44.161991  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:44.245639  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:44.245961  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:44.494498  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:44.661782  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:44.744425  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:44.745867  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:44.994289  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:45.161690  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:45.245006  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:45.245149  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:45.493877  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:45.661339  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:45.744702  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:45.745120  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:45.994674  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:46.162521  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:46.244853  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:46.245012  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:46.497344  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:46.661382  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:46.762368  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:46.762543  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:46.994723  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:47.161439  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:47.261448  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:47.261448  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:47.494650  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:47.662088  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:47.745026  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:47.745230  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:47.994546  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:48.160972  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:48.244787  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:48.245512  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:48.494366  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:48.661765  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:48.744630  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:48.745005  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:48.993998  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:49.160770  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:49.244350  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:49.245218  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:49.493754  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:49.660860  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:49.744625  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:49.745163  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:49.994730  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:50.163052  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:50.244881  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:50.245027  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:50.493896  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:50.662134  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:50.744251  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:50.745980  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:50.994853  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:51.161509  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:51.244639  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:51.244943  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:51.494949  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:51.661333  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:51.747512  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:51.747543  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:51.994905  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:52.160881  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:52.244878  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:52.245193  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:52.493888  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:52.661379  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:52.744098  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:52.745168  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:52.994305  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:53.163706  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:53.244559  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:53.245254  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:53.493972  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:53.661060  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:53.745022  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:53.745067  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:53.994444  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:54.003675  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:54.161736  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:54.244484  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:54.245027  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:54.494275  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:54.551407  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:54.551444  109339 retry.go:31] will retry after 17.625597397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
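Each failed apply is retried on a growing interval (4.9s, 7.8s, 11.3s, now 17.6s, roughly 1.5x per attempt with jitter), so a manifest that can never validate burns most of the addon wait budget before the test gives up. A hypothetical shell equivalent of that loop, not minikube's actual code:

    # growing backoff around a flaky command (sketch only)
    delay=5
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml; do
      sleep "$delay"
      delay=$(( delay * 3 / 2 ))   # grow ~1.5x per attempt
    done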
	I1101 08:56:54.661525  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:54.744571  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:54.744984  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:54.994034  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:55.162205  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:55.245360  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:55.245635  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:55.495093  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:55.661540  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:55.744671  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:55.745099  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:55.994762  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:56.161820  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:56.244739  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:56.245185  109339 kapi.go:107] duration metric: took 41.003020729s to wait for kubernetes.io/minikube-addons=registry ...
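The registry wait that completes here is the same readiness check kubectl exposes directly; an equivalent one-liner, assuming the label selector from the log and the kube-system namespace:

    kubectl -n kube-system wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=registry --timeout=120s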
	I1101 08:56:56.494624  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:56.661785  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:56.744855  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:56.993850  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:57.162856  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:57.244578  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:57.495362  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:57.662064  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:57.744859  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:57.994460  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:58.161402  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:58.244553  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:58.494725  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:58.661737  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:58.825352  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:58.994598  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:59.162176  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:59.244549  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:59.494091  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:59.662978  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:59.746502  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:59.999964  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:00.164525  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:00.244939  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:00.493763  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:00.661866  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:00.745118  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:00.993821  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:01.161121  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:01.275323  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:01.494355  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:01.664078  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:01.745650  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:01.995427  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:02.162488  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:02.244497  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:02.494993  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:02.661490  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:02.744429  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:02.994908  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:03.161519  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:03.264630  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:03.494836  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:03.661394  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:03.744683  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:03.994775  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:04.160795  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:04.244685  109339 kapi.go:107] duration metric: took 49.003814841s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:57:04.494534  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:04.723627  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:04.994136  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:05.161339  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:05.493796  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:05.661295  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:05.994519  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:06.162349  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:06.494270  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:06.662185  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:06.995073  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:07.161980  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:07.494319  109339 kapi.go:107] duration metric: took 45.503501642s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:57:07.495602  109339 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-993117 cluster.
	I1101 08:57:07.497432  109339 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:57:07.498683  109339 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
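
The webhook honors a per-pod opt-out via the label named in the message above. As a minimal sketch (the pod name and image are illustrative; the label value is conventionally "true"), a pod created like this is left without the credential mount:

    kubectl run no-gcp-creds --image=busybox \
      --labels=gcp-auth-skip-secret=true \
      -- sleep 3600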
	I1101 08:57:07.663225  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:08.161563  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:08.661165  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:09.162317  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:09.661410  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:10.161291  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:10.661317  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:11.160989  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:11.662456  109339 kapi.go:107] duration metric: took 56.0047251s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
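
The kapi.go lines above are minikube's readiness poll: roughly every 500ms it lists the pods matching a label selector and logs their phase until all of them report Ready, then prints the duration metric. A stand-alone equivalent with plain kubectl, assuming the same cluster context and the kube-system namespace seen later in this report:

    kubectl wait pod \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver \
      -n kube-system \
      --for=condition=Ready \
      --timeout=6m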
	I1101 08:57:12.177739  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:57:12.734441  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
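
This is client-side schema validation: kubectl rejects any YAML document that carries content but no apiVersion/kind header, which usually means one document inside ig-crd.yaml was truncated or mis-concatenated rather than the whole file being unreadable (note that ig-deployment.yaml still applies cleanly above). A quick way to localize the bad document without mutating the cluster, assuming the paths from the log:

    # show where each YAML document starts and what its first lines are
    sudo grep -n -A2 '^---' /etc/kubernetes/addons/ig-crd.yaml | head -20
    # re-run only the validation, persisting nothing
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml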
	I1101 08:57:12.734477  109339 retry.go:31] will retry after 16.494145924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:57:29.230132  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:57:29.769200  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:57:29.769234  109339 retry.go:31] will retry after 41.872417481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:58:11.644068  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:58:12.189889  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:58:12.190041  109339 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
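
Before giving up, retry.go spaced the re-applies with growing, jittered delays (16.5s, then 41.9s above); only once the retry budget was exhausted was the warning surfaced and the addon run allowed to continue. A bash sketch of that shape, with an illustrative attempt count and bounds rather than minikube's actual parameters:

    delay=15
    for attempt in 1 2 3; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
        -f /etc/kubernetes/addons/ig-crd.yaml \
        -f /etc/kubernetes/addons/ig-deployment.yaml && break
      sleep $(( delay + RANDOM % delay ))  # jitter, then roughly double
      delay=$(( delay * 2 ))
    done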
	I1101 08:58:12.192346  109339 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, registry-creds, metrics-server, yakd, storage-provisioner-rancher, storage-provisioner, ingress-dns, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:58:12.193974  109339 addons.go:515] duration metric: took 1m58.594666203s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner registry-creds metrics-server yakd storage-provisioner-rancher storage-provisioner ingress-dns default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 08:58:12.194027  109339 start.go:247] waiting for cluster config update ...
	I1101 08:58:12.194057  109339 start.go:256] writing updated cluster config ...
	I1101 08:58:12.194343  109339 ssh_runner.go:195] Run: rm -f paused
	I1101 08:58:12.198555  109339 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:58:12.202749  109339 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fpzpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.207288  109339 pod_ready.go:94] pod "coredns-66bc5c9577-fpzpv" is "Ready"
	I1101 08:58:12.207312  109339 pod_ready.go:86] duration metric: took 4.537945ms for pod "coredns-66bc5c9577-fpzpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.209703  109339 pod_ready.go:83] waiting for pod "etcd-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.213776  109339 pod_ready.go:94] pod "etcd-addons-993117" is "Ready"
	I1101 08:58:12.213797  109339 pod_ready.go:86] duration metric: took 4.074176ms for pod "etcd-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.215542  109339 pod_ready.go:83] waiting for pod "kube-apiserver-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.219406  109339 pod_ready.go:94] pod "kube-apiserver-addons-993117" is "Ready"
	I1101 08:58:12.219429  109339 pod_ready.go:86] duration metric: took 3.866311ms for pod "kube-apiserver-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.221333  109339 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.603283  109339 pod_ready.go:94] pod "kube-controller-manager-addons-993117" is "Ready"
	I1101 08:58:12.603324  109339 pod_ready.go:86] duration metric: took 381.969936ms for pod "kube-controller-manager-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.802306  109339 pod_ready.go:83] waiting for pod "kube-proxy-z7fst" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.202602  109339 pod_ready.go:94] pod "kube-proxy-z7fst" is "Ready"
	I1101 08:58:13.202630  109339 pod_ready.go:86] duration metric: took 400.299281ms for pod "kube-proxy-z7fst" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.403073  109339 pod_ready.go:83] waiting for pod "kube-scheduler-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.803533  109339 pod_ready.go:94] pod "kube-scheduler-addons-993117" is "Ready"
	I1101 08:58:13.803571  109339 pod_ready.go:86] duration metric: took 400.467584ms for pod "kube-scheduler-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.803586  109339 pod_ready.go:40] duration metric: took 1.604993955s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:58:13.850600  109339 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:58:13.852415  109339 out.go:179] * Done! kubectl is now configured to use "addons-993117" cluster and "default" namespace by default
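
At this point the profile's context is the kubeconfig default, so plain kubectl targets the new cluster directly; for example:

    kubectl config current-context   # addons-993117
    kubectl get pods -A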
	
	
	==> CRI-O <==
	Nov 01 09:00:35 addons-993117 crio[769]: time="2025-11-01T09:00:35.14681434Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=fb52ade8-f1f6-488d-9ab3-5ba149404cb5 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:00:35 addons-993117 crio[769]: time="2025-11-01T09:00:35.151633543Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.688614124Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=fb52ade8-f1f6-488d-9ab3-5ba149404cb5 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.689294539Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=2529991d-4c5d-4bdb-8a7a-7ef4ea249a3b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.723058868Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=0aa4ecdf-4a7f-4f3e-a157-f2241c0c5c4c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.726866286Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-9xsjx/registry-creds" id=7ce57887-6a84-42d2-801e-1033d1a1765b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.727024215Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.732952698Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.733489266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.76432188Z" level=info msg="Created container f1654d9b2115490bef868b715b4de77679849692e4fe2d4ed953ddeff134b869: kube-system/registry-creds-764b6fb674-9xsjx/registry-creds" id=7ce57887-6a84-42d2-801e-1033d1a1765b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.764965269Z" level=info msg="Starting container: f1654d9b2115490bef868b715b4de77679849692e4fe2d4ed953ddeff134b869" id=6ede34d0-9b10-4e1f-943e-9a7afe03b2e5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:00:36 addons-993117 crio[769]: time="2025-11-01T09:00:36.767170999Z" level=info msg="Started container" PID=9706 containerID=f1654d9b2115490bef868b715b4de77679849692e4fe2d4ed953ddeff134b869 description=kube-system/registry-creds-764b6fb674-9xsjx/registry-creds id=6ede34d0-9b10-4e1f-943e-9a7afe03b2e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=560e3de5f0501d46eb33ce0ba4a9e96f3ed302ccf923c0a90054369d9490caa5
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.207733587Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-4l5gq/POD" id=50c44563-d0d9-49eb-a751-53c721cc9e6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.207854569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.216056702Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-4l5gq Namespace:default ID:5f1dbf25989236e39411c49378df224f7c8376dbf5dda49c8b7b89e108754e1e UID:ab1549a6-7022-4004-a68c-83db24e1c7e2 NetNS:/var/run/netns/9e0ae06e-cf09-4b6a-94ad-4ac67f95501f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005765b8}] Aliases:map[]}"
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.216098182Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-4l5gq to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.226051709Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-4l5gq Namespace:default ID:5f1dbf25989236e39411c49378df224f7c8376dbf5dda49c8b7b89e108754e1e UID:ab1549a6-7022-4004-a68c-83db24e1c7e2 NetNS:/var/run/netns/9e0ae06e-cf09-4b6a-94ad-4ac67f95501f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005765b8}] Aliases:map[]}"
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.226180688Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-4l5gq for CNI network kindnet (type=ptp)"
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.227115911Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.227871359Z" level=info msg="Ran pod sandbox 5f1dbf25989236e39411c49378df224f7c8376dbf5dda49c8b7b89e108754e1e with infra container: default/hello-world-app-5d498dc89-4l5gq/POD" id=50c44563-d0d9-49eb-a751-53c721cc9e6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.229353005Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=50e812c3-9cc5-4568-923d-682a2439176f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.229495575Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=50e812c3-9cc5-4568-923d-682a2439176f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.229542818Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=50e812c3-9cc5-4568-923d-682a2439176f name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.230181129Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ea5cb1f6-6421-489b-a59d-ae63ddcf8f85 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:01:05 addons-993117 crio[769]: time="2025-11-01T09:01:05.234763949Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
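
These entries show the CRI image-service flow: an ImageStatus check finds neither the tag nor a matching artifact locally, so a PullImage request is issued. The same steps can be reproduced by hand on the node with crictl, assuming shell access via the profile:

    minikube -p addons-993117 ssh
    sudo crictl images | grep echo-server   # status check: not present yet
    sudo crictl pull docker.io/kicbase/echo-server:1.0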
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	f1654d9b21154       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             29 seconds ago      Running             registry-creds                           0                   560e3de5f0501       registry-creds-764b6fb674-9xsjx             kube-system
	f4fbb868fc65b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago       Running             nginx                                    0                   1f58f62698149       nginx                                       default
	5d68fdf48825c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago       Running             busybox                                  0                   98b064e519322       busybox                                     default
	3b35b0d070189       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago       Running             csi-snapshotter                          0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	8dc1437b90151       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago       Running             csi-provisioner                          0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	804b66311e935       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago       Running             liveness-probe                           0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	2821b9f559e62       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago       Running             hostpath                                 0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	8b1063e261681       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago       Running             gcp-auth                                 0                   19a2d27ca5086       gcp-auth-78565c9fb4-g7cqf                   gcp-auth
	b08eff5e2d492       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                4 minutes ago       Running             node-driver-registrar                    0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	571bf53478339       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             4 minutes ago       Running             controller                               0                   cc20cb30fbcef       ingress-nginx-controller-675c5ddd98-8fg7m   ingress-nginx
	2e7ae1f1e2452       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            4 minutes ago       Running             gadget                                   0                   d8c3f9ab14021       gadget-92zrk                                gadget
	b6a9d7748ccc5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              4 minutes ago       Running             registry-proxy                           0                   3460a4aad78f4       registry-proxy-497v5                        kube-system
	751b58c8fd0aa       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     4 minutes ago       Running             amd-gpu-device-plugin                    0                   e3a1cf7764ab6       amd-gpu-device-plugin-ldw4v                 kube-system
	cf726f61ce62e       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     4 minutes ago       Running             nvidia-device-plugin-ctr                 0                   f342fa87f62b7       nvidia-device-plugin-daemonset-hqm9x        kube-system
	cfc14b381b0aa       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   4 minutes ago       Running             csi-external-health-monitor-controller   0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	10ebfd823db73       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   e7e2357557149       snapshot-controller-7d9fbc56b8-zl99q        kube-system
	7933addcfb16f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      4 minutes ago       Running             volume-snapshot-controller               0                   d7b05be0b0bc1       snapshot-controller-7d9fbc56b8-sms8j        kube-system
	1fb99f095c842       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             4 minutes ago       Running             csi-attacher                             0                   0fbf167f5bbdf       csi-hostpath-attacher-0                     kube-system
	847964df5e7f5       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              4 minutes ago       Running             csi-resizer                              0                   de40dd2b545a2       csi-hostpath-resizer-0                      kube-system
	9ccdb57d51398       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   4 minutes ago       Exited              patch                                    0                   16a7229b5dd32       ingress-nginx-admission-patch-t2bh9         ingress-nginx
	a6545c336c6d0       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              4 minutes ago       Running             yakd                                     0                   a585e7a3cc894       yakd-dashboard-5ff678cb9-b4vdn              yakd-dashboard
	b9a7d4e2630d2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   4 minutes ago       Exited              create                                   0                   fcf4a1a0bae6f       ingress-nginx-admission-create-p6ghj        ingress-nginx
	31237bb5ad80b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             4 minutes ago       Running             local-path-provisioner                   0                   de5c323f8e9d4       local-path-provisioner-648f6765c9-cszjc     local-path-storage
	903d4bbf18d4c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               4 minutes ago       Running             minikube-ingress-dns                     0                   b936419bd7cb6       kube-ingress-dns-minikube                   kube-system
	a0559bd812da6       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           4 minutes ago       Running             registry                                 0                   83fa7d0923f27       registry-6b586f9694-785wk                   kube-system
	32c59991365e7       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               4 minutes ago       Running             cloud-spanner-emulator                   0                   53765ec0d3bdd       cloud-spanner-emulator-86bd5cbb97-gbmhj     default
	0d2603d622294       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        4 minutes ago       Running             metrics-server                           0                   6f50ba1952109       metrics-server-85b7d694d7-xfvx6             kube-system
	d1be24b1775c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             4 minutes ago       Running             storage-provisioner                      0                   a3338291f8bbc       storage-provisioner                         kube-system
	3bd1589cbc2c1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             4 minutes ago       Running             coredns                                  0                   817c2c2ee349a       coredns-66bc5c9577-fpzpv                    kube-system
	4d446343c7b2f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago       Running             kindnet-cni                              0                   75245beea1e10       kindnet-5ln5h                               kube-system
	9838dcae88ecb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago       Running             kube-proxy                               0                   fa5bb0d34c3fc       kube-proxy-z7fst                            kube-system
	4ff46c8fd9e89       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             5 minutes ago       Running             kube-controller-manager                  0                   97649c15ee686       kube-controller-manager-addons-993117       kube-system
	780e64dcae645       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             5 minutes ago       Running             etcd                                     0                   db84185288f9b       etcd-addons-993117                          kube-system
	1c79567a55106       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             5 minutes ago       Running             kube-apiserver                           0                   8e5bafffc8916       kube-apiserver-addons-993117                kube-system
	cc887abb01e9d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             5 minutes ago       Running             kube-scheduler                           0                   e6e2340283b81       kube-scheduler-addons-993117                kube-system
	
	
	==> coredns [3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede] <==
	[INFO] 10.244.0.22:33467 - 57724 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005185573s
	[INFO] 10.244.0.22:43418 - 23601 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00542252s
	[INFO] 10.244.0.22:33019 - 52769 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006848902s
	[INFO] 10.244.0.22:39233 - 39848 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004791417s
	[INFO] 10.244.0.22:51901 - 47745 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004915754s
	[INFO] 10.244.0.22:53901 - 721 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001074142s
	[INFO] 10.244.0.22:48056 - 11565 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001161465s
	[INFO] 10.244.0.25:39740 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000291228s
	[INFO] 10.244.0.25:33146 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000170129s
	[INFO] 10.244.0.31:46669 - 65369 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000293604s
	[INFO] 10.244.0.31:48086 - 22076 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000368543s
	[INFO] 10.244.0.31:51600 - 63199 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000107065s
	[INFO] 10.244.0.31:39924 - 171 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000123461s
	[INFO] 10.244.0.31:58715 - 54796 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000095841s
	[INFO] 10.244.0.31:59113 - 44791 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000101231s
	[INFO] 10.244.0.31:34071 - 32190 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003347461s
	[INFO] 10.244.0.31:34672 - 60152 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003504233s
	[INFO] 10.244.0.31:57322 - 19221 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005041118s
	[INFO] 10.244.0.31:50584 - 14107 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005242998s
	[INFO] 10.244.0.31:37211 - 4051 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003910305s
	[INFO] 10.244.0.31:54923 - 10024 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004377266s
	[INFO] 10.244.0.31:34239 - 54959 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00444386s
	[INFO] 10.244.0.31:45766 - 21322 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004845327s
	[INFO] 10.244.0.31:38966 - 6852 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001753046s
	[INFO] 10.244.0.31:38135 - 34736 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.002267755s
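
The NXDOMAIN runs above are ordinary search-path fan-out, not failures: with ndots:5, a name such as storage.googleapis.com (fewer than five dots) is tried against every suffix in the pod's search list, including the GCE-provided internal domains, before the bare name finally answers NOERROR. The pod resolver config makes this visible, e.g. from the busybox pod listed in the container status above (the printed search line here is illustrative):

    kubectl exec busybox -- cat /etc/resolv.conf
    # search default.svc.cluster.local svc.cluster.local cluster.local ...
    # options ndots:5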
	
	
	==> describe nodes <==
	Name:               addons-993117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-993117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-993117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_56_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-993117
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-993117"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-993117
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:01:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:00:43 +0000   Sat, 01 Nov 2025 08:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:00:43 +0000   Sat, 01 Nov 2025 08:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:00:43 +0000   Sat, 01 Nov 2025 08:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:00:43 +0000   Sat, 01 Nov 2025 08:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-993117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7df020e8-7b12-4d73-ac54-ad61f7ee33f3
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m52s
	  default                     cloud-spanner-emulator-86bd5cbb97-gbmhj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-4l5gq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-92zrk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  gcp-auth                    gcp-auth-78565c9fb4-g7cqf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8fg7m    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m51s
	  kube-system                 amd-gpu-device-plugin-ldw4v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 coredns-66bc5c9577-fpzpv                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m53s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 csi-hostpathplugin-vpnz6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 etcd-addons-993117                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m58s
	  kube-system                 kindnet-5ln5h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m53s
	  kube-system                 kube-apiserver-addons-993117                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-addons-993117        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-proxy-z7fst                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-scheduler-addons-993117                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         5m
	  kube-system                 metrics-server-85b7d694d7-xfvx6              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-hqm9x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 registry-6b586f9694-785wk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-9xsjx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-497v5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-sms8j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-7d9fbc56b8-zl99q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-cszjc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b4vdn               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m51s  kube-proxy       
	  Normal  Starting                 4m59s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m58s  kubelet          Node addons-993117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m58s  kubelet          Node addons-993117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m58s  kubelet          Node addons-993117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m54s  node-controller  Node addons-993117 event: Registered Node addons-993117 in Controller
	  Normal  NodeReady                4m42s  kubelet          Node addons-993117 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 53 1e 0b f5 f9 08 06
	[ +20.616610] IPv4: martian source 10.244.0.1 from 10.244.0.54, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 5d 8b 4b c3 ca 08 06
	[Nov 1 08:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.063864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023900] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023903] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +2.047798] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[Nov 1 08:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +8.511341] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +16.382756] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +32.253538] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
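
The recurring "martian source" lines are the kernel flagging packets whose source address should not appear on the receiving interface (127.0.0.1 arriving on eth0 here, a common side effect of container NAT/hairpin traffic); for this test run they are noise rather than a failure. The logging is governed by a sysctl that can be inspected or, if needed, switched off (this silences the messages only and does not change packet handling):

    sysctl net.ipv4.conf.all.log_martians
    # sudo sysctl -w net.ipv4.conf.all.log_martians=0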
	
	
	==> etcd [780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404] <==
	{"level":"warn","ts":"2025-11-01T08:56:04.822048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.828125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.834535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.840564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.846648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.852973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.859188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.865660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.872688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.878870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.885246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.892073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.912752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.919068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.925533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.975766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:16.121119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:16.127489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:35.783444Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.248766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:56:35.783665Z","caller":"traceutil/trace.go:172","msg":"trace[833785441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:972; }","duration":"123.473715ms","start":"2025-11-01T08:56:35.660166Z","end":"2025-11-01T08:56:35.783640Z","steps":["trace[833785441] 'range keys from in-memory index tree'  (duration: 123.168305ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:56:42.379829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:42.397128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:42.421946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:42.430932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53746","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T08:56:58.823865Z","caller":"traceutil/trace.go:172","msg":"trace[1608055675] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"142.82337ms","start":"2025-11-01T08:56:58.681023Z","end":"2025-11-01T08:56:58.823847Z","steps":["trace[1608055675] 'process raft request'  (duration: 142.65258ms)"],"step_count":1}
	
	
	==> gcp-auth [8b1063e261681593de2333b65c9abd4b740fd7fe445a8fc5c87d459bf5213f20] <==
	2025/11/01 08:57:06 GCP Auth Webhook started!
	2025/11/01 08:58:14 Ready to marshal response ...
	2025/11/01 08:58:14 Ready to write response ...
	2025/11/01 08:58:14 Ready to marshal response ...
	2025/11/01 08:58:14 Ready to write response ...
	2025/11/01 08:58:14 Ready to marshal response ...
	2025/11/01 08:58:14 Ready to write response ...
	2025/11/01 08:58:27 Ready to marshal response ...
	2025/11/01 08:58:27 Ready to write response ...
	2025/11/01 08:58:35 Ready to marshal response ...
	2025/11/01 08:58:35 Ready to write response ...
	2025/11/01 08:58:35 Ready to marshal response ...
	2025/11/01 08:58:35 Ready to write response ...
	2025/11/01 08:58:35 Ready to marshal response ...
	2025/11/01 08:58:35 Ready to write response ...
	2025/11/01 08:58:40 Ready to marshal response ...
	2025/11/01 08:58:40 Ready to write response ...
	2025/11/01 08:58:41 Ready to marshal response ...
	2025/11/01 08:58:41 Ready to write response ...
	2025/11/01 08:58:48 Ready to marshal response ...
	2025/11/01 08:58:48 Ready to write response ...
	2025/11/01 09:01:04 Ready to marshal response ...
	2025/11/01 09:01:04 Ready to write response ...
	
	
	==> kernel <==
	 09:01:06 up 43 min,  0 user,  load average: 0.14, 0.58, 0.63
	Linux addons-993117 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352] <==
	I1101 08:59:04.832237       1 main.go:301] handling current node
	I1101 08:59:14.832247       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:59:14.832274       1 main.go:301] handling current node
	I1101 08:59:24.833187       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:59:24.833223       1 main.go:301] handling current node
	I1101 08:59:34.833216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:59:34.833293       1 main.go:301] handling current node
	I1101 08:59:44.832498       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:59:44.832529       1 main.go:301] handling current node
	I1101 08:59:54.832957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:59:54.832987       1 main.go:301] handling current node
	I1101 09:00:04.832637       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:00:04.832668       1 main.go:301] handling current node
	I1101 09:00:14.832692       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:00:14.832724       1 main.go:301] handling current node
	I1101 09:00:24.833096       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:00:24.833143       1 main.go:301] handling current node
	I1101 09:00:34.832766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:00:34.832797       1 main.go:301] handling current node
	I1101 09:00:44.832710       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:00:44.832740       1 main.go:301] handling current node
	I1101 09:00:54.832450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:00:54.832487       1 main.go:301] handling current node
	I1101 09:01:04.832460       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:01:04.832499       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286] <==
	E1101 08:56:24.963639       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:24.988853       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.187.34:443: connect: connection refused
	E1101 08:56:24.988892       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:24.989697       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.187.34:443: connect: connection refused
	E1101 08:56:24.989731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	E1101 08:56:28.032729       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:28.032984       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:56:28.033081       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:56:28.033358       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	E1101 08:56:28.039345       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	E1101 08:56:28.061105       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	I1101 08:56:28.132690       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 08:56:42.379698       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:56:42.392871       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:56:42.421960       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:56:42.430991       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1101 08:58:24.549658       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37440: use of closed network connection
	E1101 08:58:24.707835       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37448: use of closed network connection
	I1101 08:58:37.387248       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1101 08:58:40.309176       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1101 08:58:40.507534       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.117.41"}
	I1101 09:01:04.982581       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.69.226"}
	
	
	==> kube-controller-manager [4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719] <==
	I1101 08:56:12.350723       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 08:56:12.350742       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 08:56:12.350765       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 08:56:12.350765       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-993117"
	I1101 08:56:12.350812       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 08:56:12.351038       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 08:56:12.351038       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 08:56:12.351271       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 08:56:12.351282       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 08:56:12.352169       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:56:12.352189       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 08:56:12.352243       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 08:56:12.352421       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 08:56:12.352763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 08:56:12.353986       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:56:12.355068       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 08:56:12.362777       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:56:12.370218       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 08:56:14.898750       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1101 08:56:27.355843       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1101 08:56:42.361711       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:56:42.361789       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:56:42.387832       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:56:42.462474       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:56:42.488855       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf] <==
	I1101 08:56:14.283036       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:56:14.620464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:56:14.735655       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:56:14.737539       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:56:14.738988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:56:14.875285       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:56:14.875349       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:56:14.891261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:56:14.892760       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:56:14.892832       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:56:14.895277       1 config.go:200] "Starting service config controller"
	I1101 08:56:14.895353       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:56:14.895536       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:56:14.895591       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:56:14.896319       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:56:14.896335       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:56:14.904797       1 config.go:309] "Starting node config controller"
	I1101 08:56:14.904851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:56:14.904860       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:56:15.010700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:56:15.010804       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:56:15.011155       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252] <==
	E1101 08:56:05.375271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:56:05.375319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:56:05.375339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:56:05.375371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:56:05.375393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:56:05.375415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:56:05.375559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:56:05.375692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:56:05.375716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:56:05.375731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:56:05.376222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:56:05.376287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:56:05.376314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:56:06.204430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:56:06.250731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:56:06.310281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:56:06.351304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:56:06.423221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:56:06.476648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:56:06.487743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:56:06.553818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:56:06.588199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:56:06.601405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:56:06.822731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:56:08.974238       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.731655    1272 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-gcp-creds\") pod \"6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb\" (UID: \"6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb\") "
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.731687    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-data" (OuterVolumeSpecName: "data") pod "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb" (UID: "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.731843    1272 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-data\") on node \"addons-993117\" DevicePath \"\""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.731853    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb" (UID: "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.732083    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-script" (OuterVolumeSpecName: "script") pod "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb" (UID: "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.734119    1272 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-kube-api-access-qsp27" (OuterVolumeSpecName: "kube-api-access-qsp27") pod "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb" (UID: "6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb"). InnerVolumeSpecName "kube-api-access-qsp27". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.833214    1272 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-script\") on node \"addons-993117\" DevicePath \"\""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.833248    1272 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-gcp-creds\") on node \"addons-993117\" DevicePath \"\""
	Nov 01 08:58:50 addons-993117 kubelet[1272]: I1101 08:58:50.833262    1272 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qsp27\" (UniqueName: \"kubernetes.io/projected/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb-kube-api-access-qsp27\") on node \"addons-993117\" DevicePath \"\""
	Nov 01 08:58:51 addons-993117 kubelet[1272]: I1101 08:58:51.631401    1272 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec079571fecfabdceda63640002943b484a77d90922bcccec841ed8837f09f0f"
	Nov 01 08:58:51 addons-993117 kubelet[1272]: E1101 08:58:51.632800    1272 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-0365a22a-6c12-401f-8fad-405ba975828f\" is forbidden: User \"system:node:addons-993117\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-993117' and this object" podUID="6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb" pod="local-path-storage/helper-pod-delete-pvc-0365a22a-6c12-401f-8fad-405ba975828f"
	Nov 01 08:58:51 addons-993117 kubelet[1272]: I1101 08:58:51.924564    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb" path="/var/lib/kubelet/pods/6ed6b6eb-5bde-4505-9693-30f2b5dbe5bb/volumes"
	Nov 01 08:58:51 addons-993117 kubelet[1272]: I1101 08:58:51.924980    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3db82fb-690a-41c3-bc64-e2aa99edfea2" path="/var/lib/kubelet/pods/a3db82fb-690a-41c3-bc64-e2aa99edfea2/volumes"
	Nov 01 08:59:07 addons-993117 kubelet[1272]: I1101 08:59:07.946452    1272 scope.go:117] "RemoveContainer" containerID="af497e37b272d1fe73b0495af964dd04cff7df043ce259366fa0e1b18f884ce5"
	Nov 01 08:59:07 addons-993117 kubelet[1272]: I1101 08:59:07.954452    1272 scope.go:117] "RemoveContainer" containerID="149ce9bae637e26a74af56914eba36345456b25d9f2c4f3cb9d6bbe23b56743c"
	Nov 01 08:59:07 addons-993117 kubelet[1272]: I1101 08:59:07.963030    1272 scope.go:117] "RemoveContainer" containerID="24a492636611bb91fddd2e69932563d56e97fcfb3f48a63a2a9511d1908dfd2a"
	Nov 01 08:59:10 addons-993117 kubelet[1272]: I1101 08:59:10.921629    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hqm9x" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:59:16 addons-993117 kubelet[1272]: I1101 08:59:16.921205    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ldw4v" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:59:46 addons-993117 kubelet[1272]: I1101 08:59:46.921086    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-497v5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:00:29 addons-993117 kubelet[1272]: I1101 09:00:29.921259    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hqm9x" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:00:35 addons-993117 kubelet[1272]: I1101 09:00:35.921315    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ldw4v" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:00:37 addons-993117 kubelet[1272]: I1101 09:00:37.038055    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-9xsjx" podStartSLOduration=261.494229843 podStartE2EDuration="4m23.038029998s" podCreationTimestamp="2025-11-01 08:56:14 +0000 UTC" firstStartedPulling="2025-11-01 09:00:35.146430731 +0000 UTC m=+267.309224813" lastFinishedPulling="2025-11-01 09:00:36.690230898 +0000 UTC m=+268.853024968" observedRunningTime="2025-11-01 09:00:37.037196442 +0000 UTC m=+269.199990531" watchObservedRunningTime="2025-11-01 09:00:37.038029998 +0000 UTC m=+269.200824088"
	Nov 01 09:00:47 addons-993117 kubelet[1272]: I1101 09:00:47.923312    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-497v5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 09:01:04 addons-993117 kubelet[1272]: I1101 09:01:04.997319    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ab1549a6-7022-4004-a68c-83db24e1c7e2-gcp-creds\") pod \"hello-world-app-5d498dc89-4l5gq\" (UID: \"ab1549a6-7022-4004-a68c-83db24e1c7e2\") " pod="default/hello-world-app-5d498dc89-4l5gq"
	Nov 01 09:01:04 addons-993117 kubelet[1272]: I1101 09:01:04.997406    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qp8h\" (UniqueName: \"kubernetes.io/projected/ab1549a6-7022-4004-a68c-83db24e1c7e2-kube-api-access-9qp8h\") pod \"hello-world-app-5d498dc89-4l5gq\" (UID: \"ab1549a6-7022-4004-a68c-83db24e1c7e2\") " pod="default/hello-world-app-5d498dc89-4l5gq"
	
	
	==> storage-provisioner [d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818] <==
	W1101 09:00:42.706523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:44.709563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:44.714785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:46.718526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:46.722436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:48.725492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:48.729656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:50.733280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:50.738247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:52.741277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:52.745110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:54.748555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:54.752330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:56.755809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:56.759636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:58.762532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:00:58.766111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:00.769524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:00.774427       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:02.777487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:02.782555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:04.785490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:04.789441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:06.792762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:01:06.798414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
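The storage-provisioner warnings at the end of the log above come from it still reading the deprecated core/v1 Endpoints API (likely its leader-election loop, given the steady two-second cadence); the warning itself points at discovery.k8s.io/v1 EndpointSlice as the replacement. A minimal sketch for inspecting that replacement resource on this cluster (a hypothetical follow-up, not part of the captured run):

	# EndpointSlice is the suggested replacement for the deprecated v1 Endpoints.
	kubectl --context addons-993117 get endpointslices.discovery.k8s.io -A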
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-993117 -n addons-993117
helpers_test.go:269: (dbg) Run:  kubectl --context addons-993117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-993117 describe pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-993117 describe pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9: exit status 1 (60.499007ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p6ghj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t2bh9" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-993117 describe pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9: exit status 1
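For reference, the post-mortem above can be replayed by hand against the same context; the admission pods are short-lived Job pods, so they can vanish between the list and the describe, which is exactly the NotFound result recorded above (a sketch, not part of the captured run):

	# Same probe helpers_test.go runs: list pods not in phase Running, then describe them.
	kubectl --context addons-993117 get po -A --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	kubectl --context addons-993117 describe pod ingress-nginx-admission-create-p6ghj
	# ^ may return NotFound once the completed admission Job pods are cleaned up.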
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (253.857146ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:01:07.506220  124256 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:01:07.506499  124256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:01:07.506509  124256 out.go:374] Setting ErrFile to fd 2...
	I1101 09:01:07.506513  124256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:01:07.506733  124256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:01:07.507011  124256 mustload.go:66] Loading cluster: addons-993117
	I1101 09:01:07.507338  124256 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:01:07.507351  124256 addons.go:607] checking whether the cluster is paused
	I1101 09:01:07.507428  124256 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:01:07.507457  124256 host.go:66] Checking if "addons-993117" exists ...
	I1101 09:01:07.507822  124256 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 09:01:07.527976  124256 ssh_runner.go:195] Run: systemctl --version
	I1101 09:01:07.528053  124256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 09:01:07.546856  124256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 09:01:07.646706  124256 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:01:07.646777  124256 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:01:07.676590  124256 cri.go:89] found id: "f1654d9b2115490bef868b715b4de77679849692e4fe2d4ed953ddeff134b869"
	I1101 09:01:07.676621  124256 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 09:01:07.676628  124256 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 09:01:07.676633  124256 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 09:01:07.676637  124256 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 09:01:07.676643  124256 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 09:01:07.676647  124256 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 09:01:07.676650  124256 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 09:01:07.676652  124256 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 09:01:07.676658  124256 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 09:01:07.676661  124256 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 09:01:07.676684  124256 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 09:01:07.676703  124256 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 09:01:07.676709  124256 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 09:01:07.676713  124256 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 09:01:07.676732  124256 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 09:01:07.676746  124256 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 09:01:07.676752  124256 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 09:01:07.676756  124256 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 09:01:07.676760  124256 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 09:01:07.676764  124256 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 09:01:07.676768  124256 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 09:01:07.676770  124256 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 09:01:07.676773  124256 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 09:01:07.676775  124256 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 09:01:07.676777  124256 cri.go:89] found id: ""
	I1101 09:01:07.676831  124256 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:01:07.691621  124256 out.go:203] 
	W1101 09:01:07.692746  124256 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:01:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:01:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:01:07.692773  124256 out.go:285] * 
	* 
	W1101 09:01:07.695996  124256 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:01:07.697450  124256 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
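Each `addons disable` above fails the same way: before touching the addon, minikube checks whether the cluster is paused by listing the kube-system containers through crictl and then asking runc for its container state, and on this CRI-O node `sudo runc list -f json` aborts because the state directory `/run/runc` does not exist. A minimal sketch of the same probe run by hand over `minikube ssh` (assuming the node is still up; these commands are not part of the captured run):

	# The two commands minikube issues during its "is the cluster paused?" check.
	minikube -p addons-993117 ssh -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system    # succeeds, prints container IDs
	minikube -p addons-993117 ssh -- sudo runc list -f json
	# ^ fails with `open /run/runc: no such file or directory`, which minikube
	#   surfaces as MK_ADDON_DISABLE_PAUSED / exit status 11.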
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable ingress --alsologtostderr -v=1: exit status 11 (253.718546ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:01:07.760320  124316 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:01:07.760702  124316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:01:07.760717  124316 out.go:374] Setting ErrFile to fd 2...
	I1101 09:01:07.760723  124316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:01:07.761066  124316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:01:07.761424  124316 mustload.go:66] Loading cluster: addons-993117
	I1101 09:01:07.761779  124316 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:01:07.761794  124316 addons.go:607] checking whether the cluster is paused
	I1101 09:01:07.761874  124316 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:01:07.761890  124316 host.go:66] Checking if "addons-993117" exists ...
	I1101 09:01:07.762328  124316 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 09:01:07.780115  124316 ssh_runner.go:195] Run: systemctl --version
	I1101 09:01:07.780169  124316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 09:01:07.798080  124316 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 09:01:07.898987  124316 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:01:07.899075  124316 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:01:07.931740  124316 cri.go:89] found id: "f1654d9b2115490bef868b715b4de77679849692e4fe2d4ed953ddeff134b869"
	I1101 09:01:07.931782  124316 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 09:01:07.931789  124316 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 09:01:07.931794  124316 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 09:01:07.931798  124316 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 09:01:07.931803  124316 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 09:01:07.931807  124316 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 09:01:07.931813  124316 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 09:01:07.931817  124316 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 09:01:07.931837  124316 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 09:01:07.931845  124316 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 09:01:07.931849  124316 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 09:01:07.931854  124316 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 09:01:07.931858  124316 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 09:01:07.931863  124316 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 09:01:07.931885  124316 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 09:01:07.931894  124316 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 09:01:07.931905  124316 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 09:01:07.931936  124316 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 09:01:07.931944  124316 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 09:01:07.931948  124316 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 09:01:07.931953  124316 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 09:01:07.931957  124316 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 09:01:07.931961  124316 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 09:01:07.931965  124316 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 09:01:07.931969  124316 cri.go:89] found id: ""
	I1101 09:01:07.932033  124316 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:01:07.945809  124316 out.go:203] 
	W1101 09:01:07.947022  124316 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:01:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:01:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:01:07.947045  124316 out.go:285] * 
	* 
	W1101 09:01:07.950217  124316 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:01:07.951452  124316 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.89s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-92zrk" [081efc2c-76e0-45a7-9046-924810274608] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003392304s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (274.660641ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 08:58:50.519899  121527 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:50.520152  121527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:50.520161  121527 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:50.520165  121527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:50.520368  121527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:50.520634  121527 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:50.521019  121527 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:50.521037  121527 addons.go:607] checking whether the cluster is paused
	I1101 08:58:50.521119  121527 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:50.521136  121527 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:50.521516  121527 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:50.540952  121527 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:50.541015  121527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:50.560716  121527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:50.665240  121527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:50.665349  121527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:50.701995  121527 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:50.702021  121527 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:50.702046  121527 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:50.702051  121527 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:50.702054  121527 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:50.702059  121527 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:50.702064  121527 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:50.702067  121527 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:50.702071  121527 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:50.702080  121527 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:50.702084  121527 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:50.702089  121527 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:50.702094  121527 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:50.702102  121527 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:50.702107  121527 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:50.702117  121527 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:50.702123  121527 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:50.702127  121527 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:50.702130  121527 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:50.702134  121527 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:50.702138  121527 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:50.702145  121527 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:50.702149  121527 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:50.702152  121527 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:50.702157  121527 cri.go:89] found id: ""
	I1101 08:58:50.702205  121527 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:50.719289  121527 out.go:203] 
	W1101 08:58:50.721203  121527 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:50.721247  121527 out.go:285] * 
	W1101 08:58:50.724774  121527 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:50.726404  121527 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.28s)
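Note: the gadget pod was healthy within 6s; the failure is in minikube's paused-check that runs before any addon enable/disable. The stderr above shows the chain: the crictl listing of kube-system containers succeeds (the `found id:` lines), but the follow-up `sudo runc list -f json` exits 1 because /run/runc does not exist on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED. A minimal reproduction by hand, assuming the profile is still up:

    # Succeeds: crio containers are listed, same as the cri.go lines above.
    minikube -p addons-993117 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # Fails on this node with: open /run/runc: no such file or directory
    minikube -p addons-993117 ssh -- sudo runc list -f json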
TestAddons/parallel/MetricsServer (5.39s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 2.993698ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003617928s
addons_test.go:463: (dbg) Run:  kubectl --context addons-993117 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (299.425593ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:30.125861  119134 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:30.126638  119134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:30.126650  119134 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:30.126655  119134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:30.126871  119134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:30.127187  119134 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:30.127604  119134 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:30.127621  119134 addons.go:607] checking whether the cluster is paused
	I1101 08:58:30.127714  119134 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:30.127728  119134 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:30.128265  119134 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:30.152638  119134 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:30.152710  119134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:30.175276  119134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:30.285366  119134 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:30.285456  119134 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:30.322194  119134 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:30.322226  119134 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:30.322233  119134 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:30.322238  119134 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:30.322242  119134 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:30.322248  119134 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:30.322252  119134 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:30.322257  119134 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:30.322261  119134 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:30.322275  119134 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:30.322282  119134 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:30.322287  119134 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:30.322291  119134 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:30.322295  119134 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:30.322299  119134 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:30.322308  119134 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:30.322313  119134 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:30.322317  119134 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:30.322319  119134 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:30.322321  119134 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:30.322324  119134 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:30.322326  119134 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:30.322328  119134 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:30.322331  119134 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:30.322333  119134 cri.go:89] found id: ""
	I1101 08:58:30.322370  119134 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:30.338818  119134 out.go:203] 
	W1101 08:58:30.340409  119134 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:30.340431  119134 out.go:285] * 
	W1101 08:58:30.344173  119134 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:30.345954  119134 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.39s)
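Note: the addon itself was healthy; the `kubectl top pods` call at addons_test.go:463 succeeded, and only the disable step hit the paused-check failure above. To re-verify the addon outside the harness (context and label taken from the log):

    kubectl --context addons-993117 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context addons-993117 top pods -n kube-system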
TestAddons/parallel/CSI (26.56s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1101 08:58:24.968056  107955 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.290035ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-993117 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-993117 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c4c6f6ea-3eb2-46a4-b5f8-0f0da9a8bb23] Pending
helpers_test.go:352: "task-pv-pod" [c4c6f6ea-3eb2-46a4-b5f8-0f0da9a8bb23] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c4c6f6ea-3eb2-46a4-b5f8-0f0da9a8bb23] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004205954s
addons_test.go:572: (dbg) Run:  kubectl --context addons-993117 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-993117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-993117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-993117 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-993117 delete pod task-pv-pod: (1.121307894s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-993117 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-993117 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
2025/11/01 08:58:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-993117 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [a3db82fb-690a-41c3-bc64-e2aa99edfea2] Pending
helpers_test.go:352: "task-pv-pod-restore" [a3db82fb-690a-41c3-bc64-e2aa99edfea2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [a3db82fb-690a-41c3-bc64-e2aa99edfea2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00354829s
addons_test.go:614: (dbg) Run:  kubectl --context addons-993117 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-993117 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-993117 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (251.545593ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:51.066413  121728 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:51.066507  121728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:51.066511  121728 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:51.066515  121728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:51.066730  121728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:51.067005  121728 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:51.067394  121728 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:51.067410  121728 addons.go:607] checking whether the cluster is paused
	I1101 08:58:51.067501  121728 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:51.067517  121728 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:51.067884  121728 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:51.086369  121728 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:51.086428  121728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:51.104752  121728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:51.206229  121728 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:51.206326  121728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:51.236643  121728 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:51.236666  121728 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:51.236671  121728 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:51.236683  121728 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:51.236687  121728 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:51.236693  121728 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:51.236698  121728 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:51.236702  121728 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:51.236707  121728 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:51.236716  121728 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:51.236720  121728 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:51.236725  121728 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:51.236729  121728 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:51.236734  121728 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:51.236738  121728 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:51.236750  121728 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:51.236758  121728 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:51.236762  121728 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:51.236766  121728 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:51.236769  121728 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:51.236776  121728 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:51.236779  121728 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:51.236782  121728 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:51.236785  121728 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:51.236789  121728 cri.go:89] found id: ""
	I1101 08:58:51.236840  121728 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:51.252263  121728 out.go:203] 
	W1101 08:58:51.253850  121728 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:51.253874  121728 out.go:285] * 
	W1101 08:58:51.257076  121728 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:51.258733  121728 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (258.08442ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:51.321369  121804 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:51.321659  121804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:51.321670  121804 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:51.321674  121804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:51.321893  121804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:51.322209  121804 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:51.322636  121804 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:51.322655  121804 addons.go:607] checking whether the cluster is paused
	I1101 08:58:51.322742  121804 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:51.322755  121804 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:51.323200  121804 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:51.341681  121804 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:51.341757  121804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:51.360091  121804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:51.459884  121804 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:51.459989  121804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:51.495755  121804 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:51.495784  121804 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:51.495791  121804 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:51.495796  121804 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:51.495801  121804 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:51.495806  121804 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:51.495810  121804 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:51.495814  121804 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:51.495824  121804 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:51.495836  121804 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:51.495844  121804 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:51.495848  121804 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:51.495853  121804 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:51.495860  121804 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:51.495864  121804 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:51.495879  121804 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:51.495884  121804 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:51.495888  121804 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:51.495891  121804 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:51.495893  121804 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:51.495895  121804 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:51.495898  121804 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:51.495901  121804 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:51.495903  121804 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:51.495906  121804 cri.go:89] found id: ""
	I1101 08:58:51.495968  121804 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:51.511118  121804 out.go:203] 
	W1101 08:58:51.512290  121804 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:51.512317  121804 out.go:285] * 
	W1101 08:58:51.516085  121804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:51.517367  121804 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (26.56s)
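Note: the whole provision/snapshot/restore flow passed; only the two disable steps failed on the paused-check. The readiness polls from helpers_test.go:402/427 can be replayed by hand while the objects still exist (commands taken from the log):

    kubectl --context addons-993117 get pvc hpvc -n default -o jsonpath='{.status.phase}'                                 # Bound once provisioned
    kubectl --context addons-993117 get volumesnapshot new-snapshot-demo -n default -o jsonpath='{.status.readyToUse}'    # true once ready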
TestAddons/parallel/Headlamp (2.7s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-993117 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-993117 --alsologtostderr -v=1: exit status 11 (264.31343ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:25.030398  118208 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:25.030698  118208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:25.030711  118208 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:25.030715  118208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:25.030952  118208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:25.031282  118208 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:25.031822  118208 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:25.031846  118208 addons.go:607] checking whether the cluster is paused
	I1101 08:58:25.031990  118208 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:25.032017  118208 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:25.032418  118208 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:25.053494  118208 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:25.053562  118208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:25.073488  118208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:25.174025  118208 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:25.174119  118208 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:25.204035  118208 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:25.204065  118208 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:25.204068  118208 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:25.204071  118208 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:25.204074  118208 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:25.204077  118208 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:25.204079  118208 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:25.204082  118208 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:25.204084  118208 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:25.204090  118208 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:25.204092  118208 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:25.204094  118208 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:25.204096  118208 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:25.204098  118208 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:25.204101  118208 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:25.204107  118208 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:25.204111  118208 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:25.204117  118208 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:25.204121  118208 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:25.204124  118208 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:25.204128  118208 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:25.204131  118208 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:25.204134  118208 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:25.204138  118208 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:25.204142  118208 cri.go:89] found id: ""
	I1101 08:58:25.204186  118208 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:25.218823  118208 out.go:203] 
	W1101 08:58:25.220113  118208 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:25.220133  118208 out.go:285] * 
	W1101 08:58:25.223245  118208 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:25.224675  118208 out.go:203] 
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-993117 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-993117
helpers_test.go:243: (dbg) docker inspect addons-993117:
-- stdout --
	[
	    {
	        "Id": "d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496",
	        "Created": "2025-11-01T08:55:53.852267328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 109978,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T08:55:53.895628926Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/hostname",
	        "HostsPath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/hosts",
	        "LogPath": "/var/lib/docker/containers/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496/d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496-json.log",
	        "Name": "/addons-993117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-993117:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-993117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d9e4415568e0bbe95169c0b08619823e4afd5e788a84a7ca5189210da1b5f496",
	                "LowerDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af5c0e70df95d6a75973586a74737e4442c6f0678defcfe4d83d43df8f4390b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-993117",
	                "Source": "/var/lib/docker/volumes/addons-993117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-993117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-993117",
	                "name.minikube.sigs.k8s.io": "addons-993117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "f59be55a36e79acb763b4b6bca255d53a6a1ad9a75ad0e25ed66f87587a6a830",
	            "SandboxKey": "/var/run/docker/netns/f59be55a36e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-993117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:cf:a3:0f:b1:aa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "45cd133f52f80206aac8969a8f74f81258b1da37ef0e39e860a4b8aff91aaab7",
	                    "EndpointID": "0b98a5a37bde8d1a251ed5051bbf98f524bc98ecec24e832d99989cc2c032807",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-993117",
	                        "d9e4415568e0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
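
The inspect dump above is the raw material the test harness parses when it needs a host-side port: each published container port ("22/tcp", "2376/tcp", "5000/tcp", "8443/tcp", "32443/tcp") is bound to an ephemeral port on 127.0.0.1. As a minimal sketch of that lookup, assuming only a docker CLI on PATH (hostPortFor is a hypothetical helper for illustration, not minikube's actual code), the same Go-template query that recurs throughout the log below can be run from Go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPortFor resolves the host-side port Docker published for a container
	// port, using the same Go-template query the minikube logs below run via
	// cli_runner ("docker container inspect -f ...").
	func hostPortFor(container, containerPort string) (string, error) {
		// %q quotes containerPort, yielding e.g.
		// {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// "22/tcp" maps to 127.0.0.1:32768 in the NetworkSettings.Ports dump above.
		port, err := hostPortFor("addons-993117", "22/tcp")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Println("ssh host port:", port)
	}

Against the container inspected above this would print 32768, the HostPort that the later provisioning steps dial for SSH.
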
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-993117 -n addons-993117
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-993117 logs -n 25: (1.179126351s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┬─────────────
	│ COMMAND │ PROFILE                │ USER    │ VERSION │ START TIME          │ END TIME            │ ARGS
	├─────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┼─────────────
	│ start   │ download-only-998424   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │ -o=json --download-only -p download-only-998424 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
	│ delete  │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ --all
	│ delete  │ download-only-998424   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ -p download-only-998424
	│ start   │ download-only-701138   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │ -o=json --download-only -p download-only-701138 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
	│ delete  │ minikube               │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ --all
	│ delete  │ download-only-701138   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ -p download-only-701138
	│ delete  │ download-only-998424   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ -p download-only-998424
	│ delete  │ download-only-701138   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ -p download-only-701138
	│ start   │ download-docker-005556 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │ --download-only -p download-docker-005556 --alsologtostderr --driver=docker  --container-runtime=crio
	│ delete  │ download-docker-005556 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ -p download-docker-005556
	│ start   │ binary-mirror-394939   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │ --download-only -p binary-mirror-394939 --alsologtostderr --binary-mirror http://127.0.0.1:41257 --driver=docker  --container-runtime=crio
	│ delete  │ binary-mirror-394939   │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │ -p binary-mirror-394939
	│ addons  │ addons-993117          │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │ disable dashboard -p addons-993117
	│ addons  │ addons-993117          │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │ enable dashboard -p addons-993117
	│ start   │ addons-993117          │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:58 UTC │ -p addons-993117 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
	│ addons  │ addons-993117          │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │ addons-993117 addons disable volcano --alsologtostderr -v=1
	│ addons  │ addons-993117          │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │ addons-993117 addons disable gcp-auth --alsologtostderr -v=1
	│ addons  │ addons-993117          │ jenkins │ v1.37.0 │ 01 Nov 25 08:58 UTC │                     │ enable headlamp -p addons-993117 --alsologtostderr -v=1
	└─────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┴─────────────
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:55:32
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:55:32.548812  109339 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:55:32.549104  109339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:32.549115  109339 out.go:374] Setting ErrFile to fd 2...
	I1101 08:55:32.549119  109339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:32.549340  109339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:55:32.549849  109339 out.go:368] Setting JSON to false
	I1101 08:55:32.550794  109339 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2271,"bootTime":1761985062,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:55:32.550898  109339 start.go:143] virtualization: kvm guest
	I1101 08:55:32.552588  109339 out.go:179] * [addons-993117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:55:32.553717  109339 notify.go:221] Checking for updates...
	I1101 08:55:32.553766  109339 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 08:55:32.554800  109339 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:55:32.555942  109339 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 08:55:32.557139  109339 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 08:55:32.558247  109339 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 08:55:32.559237  109339 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 08:55:32.560357  109339 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:55:32.583557  109339 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:55:32.583728  109339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:32.643823  109339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-01 08:55:32.632728071 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:32.643942  109339 docker.go:319] overlay module found
	I1101 08:55:32.645813  109339 out.go:179] * Using the docker driver based on user configuration
	I1101 08:55:32.647067  109339 start.go:309] selected driver: docker
	I1101 08:55:32.647086  109339 start.go:930] validating driver "docker" against <nil>
	I1101 08:55:32.647097  109339 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 08:55:32.647606  109339 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:32.709308  109339 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-01 08:55:32.698413994 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:32.709477  109339 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:55:32.709675  109339 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:55:32.711307  109339 out.go:179] * Using Docker driver with root privileges
	I1101 08:55:32.714031  109339 cni.go:84] Creating CNI manager for ""
	I1101 08:55:32.714119  109339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:55:32.714135  109339 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:55:32.714258  109339 start.go:353] cluster config:
	{Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:55:32.715607  109339 out.go:179] * Starting "addons-993117" primary control-plane node in "addons-993117" cluster
	I1101 08:55:32.716636  109339 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:55:32.717887  109339 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:55:32.718837  109339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:32.718877  109339 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:55:32.718893  109339 cache.go:59] Caching tarball of preloaded images
	I1101 08:55:32.718926  109339 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:55:32.719007  109339 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 08:55:32.719024  109339 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 08:55:32.719429  109339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/config.json ...
	I1101 08:55:32.719462  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/config.json: {Name:mk7fb1382f374dec11d4a262e2754219dc35c482 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:32.735898  109339 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:55:32.736045  109339 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:55:32.736068  109339 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:55:32.736074  109339 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:55:32.736087  109339 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:55:32.736095  109339 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1101 08:55:45.495609  109339 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1101 08:55:45.495660  109339 cache.go:233] Successfully downloaded all kic artifacts
	I1101 08:55:45.495705  109339 start.go:360] acquireMachinesLock for addons-993117: {Name:mkba6252113cec7e55aec81713c4f8d8e7b23cec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 08:55:45.495820  109339 start.go:364] duration metric: took 90.983µs to acquireMachinesLock for "addons-993117"
	I1101 08:55:45.495854  109339 start.go:93] Provisioning new machine with config: &{Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:55:45.495965  109339 start.go:125] createHost starting for "" (driver="docker")
	I1101 08:55:45.497661  109339 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1101 08:55:45.497885  109339 start.go:159] libmachine.API.Create for "addons-993117" (driver="docker")
	I1101 08:55:45.497926  109339 client.go:173] LocalClient.Create starting
	I1101 08:55:45.498037  109339 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem
	I1101 08:55:45.883103  109339 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem
	I1101 08:55:46.075456  109339 cli_runner.go:164] Run: docker network inspect addons-993117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 08:55:46.092641  109339 cli_runner.go:211] docker network inspect addons-993117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 08:55:46.092716  109339 network_create.go:284] running [docker network inspect addons-993117] to gather additional debugging logs...
	I1101 08:55:46.092734  109339 cli_runner.go:164] Run: docker network inspect addons-993117
	W1101 08:55:46.110331  109339 cli_runner.go:211] docker network inspect addons-993117 returned with exit code 1
	I1101 08:55:46.110363  109339 network_create.go:287] error running [docker network inspect addons-993117]: docker network inspect addons-993117: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-993117 not found
	I1101 08:55:46.110387  109339 network_create.go:289] output of [docker network inspect addons-993117]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-993117 not found
	
	** /stderr **
	I1101 08:55:46.110492  109339 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:55:46.128565  109339 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0024a8830}
	I1101 08:55:46.128604  109339 network_create.go:124] attempt to create docker network addons-993117 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1101 08:55:46.128666  109339 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-993117 addons-993117
	I1101 08:55:46.187525  109339 network_create.go:108] docker network addons-993117 192.168.49.0/24 created
	I1101 08:55:46.187558  109339 kic.go:121] calculated static IP "192.168.49.2" for the "addons-993117" container
	I1101 08:55:46.187625  109339 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 08:55:46.204134  109339 cli_runner.go:164] Run: docker volume create addons-993117 --label name.minikube.sigs.k8s.io=addons-993117 --label created_by.minikube.sigs.k8s.io=true
	I1101 08:55:46.223136  109339 oci.go:103] Successfully created a docker volume addons-993117
	I1101 08:55:46.223220  109339 cli_runner.go:164] Run: docker run --rm --name addons-993117-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-993117 --entrypoint /usr/bin/test -v addons-993117:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 08:55:49.524867  109339 cli_runner.go:217] Completed: docker run --rm --name addons-993117-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-993117 --entrypoint /usr/bin/test -v addons-993117:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (3.301598701s)
	I1101 08:55:49.524900  109339 oci.go:107] Successfully prepared a docker volume addons-993117
	I1101 08:55:49.524945  109339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:49.524972  109339 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 08:55:49.525048  109339 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-993117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1101 08:55:53.776330  109339 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-993117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.251239922s)
	I1101 08:55:53.776368  109339 kic.go:203] duration metric: took 4.251390537s to extract preloaded images to volume ...
	W1101 08:55:53.776462  109339 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 08:55:53.776497  109339 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 08:55:53.776536  109339 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 08:55:53.835129  109339 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-993117 --name addons-993117 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-993117 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-993117 --network addons-993117 --ip 192.168.49.2 --volume addons-993117:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 08:55:54.139942  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Running}}
	I1101 08:55:54.158837  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:55:54.176287  109339 cli_runner.go:164] Run: docker exec addons-993117 stat /var/lib/dpkg/alternatives/iptables
	I1101 08:55:54.218769  109339 oci.go:144] the created container "addons-993117" has a running status.
	I1101 08:55:54.218802  109339 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa...
	I1101 08:55:54.331223  109339 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 08:55:54.356976  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:55:54.379027  109339 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 08:55:54.379054  109339 kic_runner.go:114] Args: [docker exec --privileged addons-993117 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 08:55:54.421059  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:55:54.447107  109339 machine.go:94] provisionDockerMachine start ...
	I1101 08:55:54.447234  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:54.470450  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:54.470773  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:54.470795  109339 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 08:55:54.617800  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-993117
	
	I1101 08:55:54.617850  109339 ubuntu.go:182] provisioning hostname "addons-993117"
	I1101 08:55:54.617928  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:54.637045  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:54.637264  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:54.637278  109339 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-993117 && echo "addons-993117" | sudo tee /etc/hostname
	I1101 08:55:54.788367  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-993117
	
	I1101 08:55:54.788450  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:54.808243  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:54.808546  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:54.808575  109339 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-993117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-993117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-993117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 08:55:54.952950  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 08:55:54.952986  109339 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 08:55:54.953032  109339 ubuntu.go:190] setting up certificates
	I1101 08:55:54.953045  109339 provision.go:84] configureAuth start
	I1101 08:55:54.953104  109339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-993117
	I1101 08:55:54.970715  109339 provision.go:143] copyHostCerts
	I1101 08:55:54.970784  109339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 08:55:54.970893  109339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 08:55:54.970991  109339 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 08:55:54.971052  109339 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.addons-993117 san=[127.0.0.1 192.168.49.2 addons-993117 localhost minikube]
	I1101 08:55:55.676163  109339 provision.go:177] copyRemoteCerts
	I1101 08:55:55.676225  109339 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 08:55:55.676260  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:55.694218  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:55.795276  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 08:55:55.814237  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1101 08:55:55.831285  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 08:55:55.848890  109339 provision.go:87] duration metric: took 895.830777ms to configureAuth
	I1101 08:55:55.848940  109339 ubuntu.go:206] setting minikube options for container-runtime
	I1101 08:55:55.849104  109339 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:55:55.849203  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:55.867410  109339 main.go:143] libmachine: Using SSH client type: native
	I1101 08:55:55.867637  109339 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1101 08:55:55.867656  109339 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 08:55:56.120652  109339 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 08:55:56.120675  109339 machine.go:97] duration metric: took 1.673534029s to provisionDockerMachine
	I1101 08:55:56.120686  109339 client.go:176] duration metric: took 10.622750185s to LocalClient.Create
	I1101 08:55:56.120705  109339 start.go:167] duration metric: took 10.622822454s to libmachine.API.Create "addons-993117"
	I1101 08:55:56.120711  109339 start.go:293] postStartSetup for "addons-993117" (driver="docker")
	I1101 08:55:56.120721  109339 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 08:55:56.120793  109339 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 08:55:56.120842  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.138699  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.240668  109339 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 08:55:56.244416  109339 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 08:55:56.244440  109339 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 08:55:56.244456  109339 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 08:55:56.244528  109339 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 08:55:56.244556  109339 start.go:296] duration metric: took 123.838767ms for postStartSetup
	I1101 08:55:56.244853  109339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-993117
	I1101 08:55:56.261843  109339 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/config.json ...
	I1101 08:55:56.262115  109339 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 08:55:56.262159  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.279569  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.376609  109339 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 08:55:56.381149  109339 start.go:128] duration metric: took 10.885168616s to createHost
	I1101 08:55:56.381173  109339 start.go:83] releasing machines lock for "addons-993117", held for 10.885336124s
	I1101 08:55:56.381250  109339 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-993117
	I1101 08:55:56.398440  109339 ssh_runner.go:195] Run: cat /version.json
	I1101 08:55:56.398489  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.398510  109339 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 08:55:56.398596  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:55:56.416778  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.416983  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:55:56.567578  109339 ssh_runner.go:195] Run: systemctl --version
	I1101 08:55:56.574026  109339 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 08:55:56.608789  109339 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 08:55:56.613704  109339 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 08:55:56.613758  109339 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 08:55:56.640216  109339 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 08:55:56.640237  109339 start.go:496] detecting cgroup driver to use...
	I1101 08:55:56.640268  109339 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 08:55:56.640306  109339 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 08:55:56.656181  109339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 08:55:56.669399  109339 docker.go:218] disabling cri-docker service (if available) ...
	I1101 08:55:56.669457  109339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 08:55:56.686093  109339 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 08:55:56.704236  109339 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 08:55:56.788975  109339 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 08:55:56.876182  109339 docker.go:234] disabling docker service ...
	I1101 08:55:56.876252  109339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 08:55:56.895382  109339 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 08:55:56.907775  109339 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 08:55:56.991686  109339 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 08:55:57.072407  109339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 08:55:57.085082  109339 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 08:55:57.099153  109339 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 08:55:57.099212  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.109844  109339 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 08:55:57.109942  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.118858  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.127480  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.136400  109339 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 08:55:57.144552  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.153193  109339 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.167013  109339 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 08:55:57.175736  109339 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 08:55:57.183513  109339 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 08:55:57.191293  109339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:55:57.268236  109339 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 08:55:57.373121  109339 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 08:55:57.373199  109339 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 08:55:57.377273  109339 start.go:564] Will wait 60s for crictl version
	I1101 08:55:57.377333  109339 ssh_runner.go:195] Run: which crictl
	I1101 08:55:57.380973  109339 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 08:55:57.404589  109339 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 08:55:57.404671  109339 ssh_runner.go:195] Run: crio --version
	I1101 08:55:57.434048  109339 ssh_runner.go:195] Run: crio --version
	I1101 08:55:57.463251  109339 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 08:55:57.464436  109339 cli_runner.go:164] Run: docker network inspect addons-993117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 08:55:57.482346  109339 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1101 08:55:57.486490  109339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:55:57.497024  109339 kubeadm.go:884] updating cluster {Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 08:55:57.497139  109339 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:57.497187  109339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:55:57.530509  109339 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:55:57.530530  109339 crio.go:433] Images already preloaded, skipping extraction
	I1101 08:55:57.530575  109339 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 08:55:57.556161  109339 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 08:55:57.556183  109339 cache_images.go:86] Images are preloaded, skipping loading
	I1101 08:55:57.556192  109339 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1101 08:55:57.556279  109339 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-993117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
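In the kubelet drop-in above, the bare ExecStart= line is the standard systemd idiom for clearing the base unit's ExecStart before the override redefines it; without it, systemd would reject a second ExecStart on a non-oneshot service. The merged unit can be inspected on the node with:

    systemctl cat kubelet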
	I1101 08:55:57.556340  109339 ssh_runner.go:195] Run: crio config
	I1101 08:55:57.600523  109339 cni.go:84] Creating CNI manager for ""
	I1101 08:55:57.600544  109339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:55:57.600558  109339 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 08:55:57.600586  109339 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-993117 NodeName:addons-993117 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 08:55:57.600804  109339 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-993117"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
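	Note that the rendered kubeadm.yaml deliberately disables kubelet disk pressure handling (imageGCHighThresholdPercent: 100 and all evictionHard thresholds at 0%) so image GC and eviction never interfere with test pods. Assuming the kubeadm binary staged above ships the validate subcommand (present in recent releases), the file can be sanity-checked before init:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml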
	
	I1101 08:55:57.600881  109339 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 08:55:57.609081  109339 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 08:55:57.609162  109339 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 08:55:57.617309  109339 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1101 08:55:57.630535  109339 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 08:55:57.645812  109339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1101 08:55:57.658699  109339 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1101 08:55:57.662479  109339 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 08:55:57.672963  109339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:55:57.752686  109339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 08:55:57.778854  109339 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117 for IP: 192.168.49.2
	I1101 08:55:57.778875  109339 certs.go:195] generating shared ca certs ...
	I1101 08:55:57.778898  109339 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:57.779041  109339 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 08:55:57.911935  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt ...
	I1101 08:55:57.911972  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt: {Name:mk8da0f06e8b560623b0b57274ff3cad3668f0e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:57.912175  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key ...
	I1101 08:55:57.912188  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key: {Name:mkbe4b7e166b5cfbcf8ea62c6168fd9056b2e3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:57.912267  109339 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 08:55:58.146928  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt ...
	I1101 08:55:58.146963  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt: {Name:mk8c15fe379a589af8cda80c274386f0bd2927a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.147147  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key ...
	I1101 08:55:58.147159  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key: {Name:mk701633aa810a9fbee56cdd65787d539763830b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.147236  109339 certs.go:257] generating profile certs ...
	I1101 08:55:58.147300  109339 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.key
	I1101 08:55:58.147313  109339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt with IP's: []
	I1101 08:55:58.418736  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt ...
	I1101 08:55:58.418772  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: {Name:mk27b8a3fc9889b3dd3cc67551cb7036fe84c509 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.418974  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.key ...
	I1101 08:55:58.418988  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.key: {Name:mk613e2d496dbfd1ae4d809fe3dbd7ff2f66063c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.419082  109339 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2
	I1101 08:55:58.419106  109339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1101 08:55:58.576667  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2 ...
	I1101 08:55:58.576703  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2: {Name:mkd5180b2eaa5b25dc89f2ecbf3d185e57d7f5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.576882  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2 ...
	I1101 08:55:58.576896  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2: {Name:mka9620dde79f2b51f059a80ca0cc74f82891745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.576981  109339 certs.go:382] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt.2272afb2 -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt
	I1101 08:55:58.577063  109339 certs.go:386] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key.2272afb2 -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key
	I1101 08:55:58.577117  109339 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key
	I1101 08:55:58.577137  109339 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt with IP's: []
	I1101 08:55:58.660197  109339 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt ...
	I1101 08:55:58.660229  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt: {Name:mk740bdd7ee89526666c36fdfaf7b64d1105174e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.660404  109339 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key ...
	I1101 08:55:58.660418  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key: {Name:mkb32fc47490f1ea22195e9a3e4051fda68db6f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:55:58.660593  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 08:55:58.660628  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 08:55:58.660649  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 08:55:58.660668  109339 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 08:55:58.661292  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 08:55:58.680929  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 08:55:58.699726  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 08:55:58.718882  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 08:55:58.737421  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1101 08:55:58.755343  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 08:55:58.773698  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 08:55:58.791195  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 08:55:58.808968  109339 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 08:55:58.828998  109339 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 08:55:58.842236  109339 ssh_runner.go:195] Run: openssl version
	I1101 08:55:58.848317  109339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 08:55:58.859505  109339 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:55:58.863453  109339 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:55:58.863513  109339 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 08:55:58.898087  109339 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
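The b5213941.0 symlink name is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem printed by the openssl x509 -hash call above, the same hashed-directory scheme c_rehash uses so that OpenSSL lookups find the CA in /etc/ssl/certs. It can be reproduced with:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941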
	I1101 08:55:58.906971  109339 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 08:55:58.910997  109339 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 08:55:58.911051  109339 kubeadm.go:401] StartCluster: {Name:addons-993117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-993117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:55:58.911122  109339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:55:58.911166  109339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:55:58.938532  109339 cri.go:89] found id: ""
	I1101 08:55:58.938605  109339 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 08:55:58.946744  109339 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 08:55:58.954806  109339 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 08:55:58.954864  109339 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 08:55:58.962484  109339 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 08:55:58.962506  109339 kubeadm.go:158] found existing configuration files:
	
	I1101 08:55:58.962545  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 08:55:58.970036  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 08:55:58.970092  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 08:55:58.978180  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 08:55:58.985626  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 08:55:58.985788  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 08:55:58.993560  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 08:55:59.001050  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 08:55:59.001106  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 08:55:59.008439  109339 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 08:55:59.016068  109339 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 08:55:59.016135  109339 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 08:55:59.023594  109339 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 08:55:59.060542  109339 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 08:55:59.060591  109339 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 08:55:59.083090  109339 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 08:55:59.083204  109339 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 08:55:59.083268  109339 kubeadm.go:319] OS: Linux
	I1101 08:55:59.083327  109339 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 08:55:59.083395  109339 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 08:55:59.083475  109339 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 08:55:59.083552  109339 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 08:55:59.083633  109339 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 08:55:59.083744  109339 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 08:55:59.083826  109339 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 08:55:59.083891  109339 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 08:55:59.142301  109339 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 08:55:59.142462  109339 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 08:55:59.142607  109339 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 08:55:59.151028  109339 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1101 08:55:59.153338  109339 out.go:252]   - Generating certificates and keys ...
	I1101 08:55:59.153443  109339 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 08:55:59.153569  109339 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 08:55:59.330456  109339 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 08:55:59.455187  109339 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 08:55:59.553416  109339 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 08:55:59.886967  109339 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 08:56:00.075643  109339 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 08:56:00.075804  109339 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-993117 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:56:00.606652  109339 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 08:56:00.606822  109339 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-993117 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1101 08:56:00.743553  109339 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 08:56:00.879200  109339 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 08:56:01.038748  109339 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 08:56:01.038824  109339 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 08:56:01.170839  109339 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 08:56:01.309080  109339 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 08:56:01.430203  109339 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 08:56:01.534382  109339 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 08:56:01.660318  109339 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 08:56:01.660764  109339 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 08:56:01.664665  109339 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1101 08:56:01.666168  109339 out.go:252]   - Booting up control plane ...
	I1101 08:56:01.666355  109339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 08:56:01.666480  109339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 08:56:01.667123  109339 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 08:56:01.680973  109339 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 08:56:01.681114  109339 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 08:56:01.688415  109339 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 08:56:01.688699  109339 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 08:56:01.688755  109339 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 08:56:01.786168  109339 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 08:56:01.786318  109339 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 08:56:03.287208  109339 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501173534s
	I1101 08:56:03.291134  109339 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 08:56:03.291227  109339 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1101 08:56:03.291311  109339 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 08:56:03.291399  109339 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 08:56:04.314456  109339 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.023227837s
	I1101 08:56:05.377779  109339 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.086605121s
	I1101 08:56:07.293368  109339 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002135105s
	I1101 08:56:07.305463  109339 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 08:56:07.317546  109339 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 08:56:07.327364  109339 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 08:56:07.327692  109339 kubeadm.go:319] [mark-control-plane] Marking the node addons-993117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 08:56:07.335856  109339 kubeadm.go:319] [bootstrap-token] Using token: xs4pqr.4gc1opr1rfh0byc9
	I1101 08:56:07.338572  109339 out.go:252]   - Configuring RBAC rules ...
	I1101 08:56:07.338710  109339 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 08:56:07.341965  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 08:56:07.348055  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 08:56:07.350879  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 08:56:07.353563  109339 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 08:56:07.357855  109339 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 08:56:07.700122  109339 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 08:56:08.119213  109339 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 08:56:08.699456  109339 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 08:56:08.700349  109339 kubeadm.go:319] 
	I1101 08:56:08.700433  109339 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 08:56:08.700444  109339 kubeadm.go:319] 
	I1101 08:56:08.700545  109339 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 08:56:08.700570  109339 kubeadm.go:319] 
	I1101 08:56:08.700619  109339 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 08:56:08.700691  109339 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 08:56:08.700748  109339 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 08:56:08.700758  109339 kubeadm.go:319] 
	I1101 08:56:08.700843  109339 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 08:56:08.700852  109339 kubeadm.go:319] 
	I1101 08:56:08.700945  109339 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 08:56:08.700954  109339 kubeadm.go:319] 
	I1101 08:56:08.701031  109339 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 08:56:08.701139  109339 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 08:56:08.701203  109339 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 08:56:08.701209  109339 kubeadm.go:319] 
	I1101 08:56:08.701278  109339 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 08:56:08.701354  109339 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 08:56:08.701360  109339 kubeadm.go:319] 
	I1101 08:56:08.701433  109339 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xs4pqr.4gc1opr1rfh0byc9 \
	I1101 08:56:08.701522  109339 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 \
	I1101 08:56:08.701542  109339 kubeadm.go:319] 	--control-plane 
	I1101 08:56:08.701548  109339 kubeadm.go:319] 
	I1101 08:56:08.701665  109339 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 08:56:08.701680  109339 kubeadm.go:319] 
	I1101 08:56:08.701759  109339 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xs4pqr.4gc1opr1rfh0byc9 \
	I1101 08:56:08.701870  109339 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 
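The --discovery-token-ca-cert-hash printed in the join command is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA cert minikube staged earlier (path as used in this log; the upstream docs show the same pipeline against /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'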
	I1101 08:56:08.704271  109339 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 08:56:08.704441  109339 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 08:56:08.704468  109339 cni.go:84] Creating CNI manager for ""
	I1101 08:56:08.704478  109339 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:56:08.706289  109339 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 08:56:08.707555  109339 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 08:56:08.711705  109339 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 08:56:08.711728  109339 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 08:56:08.725308  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
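kindnet is applied from /var/tmp/minikube/cni.yaml as a kube-system DaemonSet. A quick post-apply health check, assuming the manifest's usual app=kindnet label (the manifest body is not shown in this log):

    kubectl -n kube-system get pods -l app=kindnet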
	I1101 08:56:08.933409  109339 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 08:56:08.933497  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:08.933562  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-993117 minikube.k8s.io/updated_at=2025_11_01T08_56_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=addons-993117 minikube.k8s.io/primary=true
	I1101 08:56:09.027961  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:09.027960  109339 ops.go:34] apiserver oom_adj: -16
	I1101 08:56:09.528080  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:10.028465  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:10.528117  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:11.028718  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:11.528995  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:12.028884  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:12.528772  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:13.028882  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:13.529047  109339 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 08:56:13.598421  109339 kubeadm.go:1114] duration metric: took 4.664986368s to wait for elevateKubeSystemPrivileges
	I1101 08:56:13.598455  109339 kubeadm.go:403] duration metric: took 14.687408886s to StartCluster
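The burst of kubectl get sa default calls from 08:56:09 through 08:56:13 above is a poll for the default ServiceAccount, which the controller manager creates asynchronously after init; the 4.66s elevateKubeSystemPrivileges metric is simply the length of that wait. As a shell sketch of the equivalent loop:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done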
	I1101 08:56:13.598475  109339 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:56:13.598579  109339 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 08:56:13.599018  109339 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 08:56:13.599210  109339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 08:56:13.599222  109339 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 08:56:13.599300  109339 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1101 08:56:13.599431  109339 addons.go:70] Setting yakd=true in profile "addons-993117"
	I1101 08:56:13.599444  109339 addons.go:70] Setting inspektor-gadget=true in profile "addons-993117"
	I1101 08:56:13.599467  109339 addons.go:70] Setting metrics-server=true in profile "addons-993117"
	I1101 08:56:13.599477  109339 addons.go:239] Setting addon inspektor-gadget=true in "addons-993117"
	I1101 08:56:13.599482  109339 addons.go:239] Setting addon metrics-server=true in "addons-993117"
	I1101 08:56:13.599475  109339 addons.go:70] Setting default-storageclass=true in profile "addons-993117"
	I1101 08:56:13.599504  109339 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-993117"
	I1101 08:56:13.599513  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599459  109339 addons.go:239] Setting addon yakd=true in "addons-993117"
	I1101 08:56:13.599519  109339 addons.go:70] Setting gcp-auth=true in profile "addons-993117"
	I1101 08:56:13.599535  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599536  109339 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-993117"
	I1101 08:56:13.599543  109339 addons.go:70] Setting ingress-dns=true in profile "addons-993117"
	I1101 08:56:13.599551  109339 mustload.go:66] Loading cluster: addons-993117
	I1101 08:56:13.599562  109339 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-993117"
	I1101 08:56:13.599562  109339 addons.go:239] Setting addon ingress-dns=true in "addons-993117"
	I1101 08:56:13.599596  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599524  109339 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:56:13.599515  109339 addons.go:70] Setting cloud-spanner=true in profile "addons-993117"
	I1101 08:56:13.600140  109339 addons.go:239] Setting addon cloud-spanner=true in "addons-993117"
	I1101 08:56:13.600173  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.600357  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.600435  109339 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-993117"
	I1101 08:56:13.600453  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.600477  109339 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-993117"
	I1101 08:56:13.600486  109339 addons.go:70] Setting registry=true in profile "addons-993117"
	I1101 08:56:13.600493  109339 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-993117"
	I1101 08:56:13.600498  109339 addons.go:239] Setting addon registry=true in "addons-993117"
	I1101 08:56:13.599498  109339 addons.go:70] Setting ingress=true in profile "addons-993117"
	I1101 08:56:13.600512  109339 addons.go:239] Setting addon ingress=true in "addons-993117"
	I1101 08:56:13.600520  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.600498  109339 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-993117"
	I1101 08:56:13.600536  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.600549  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.601099  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.601102  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.601166  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.601991  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.602551  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.603137  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.599515  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.599633  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.602091  109339 addons.go:70] Setting registry-creds=true in profile "addons-993117"
	I1101 08:56:13.603434  109339 addons.go:239] Setting addon registry-creds=true in "addons-993117"
	I1101 08:56:13.603477  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.602120  109339 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:56:13.602338  109339 addons.go:70] Setting volumesnapshots=true in profile "addons-993117"
	I1101 08:56:13.603757  109339 addons.go:239] Setting addon volumesnapshots=true in "addons-993117"
	I1101 08:56:13.603788  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.602352  109339 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-993117"
	I1101 08:56:13.603965  109339 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-993117"
	I1101 08:56:13.602362  109339 addons.go:70] Setting volcano=true in profile "addons-993117"
	I1101 08:56:13.604093  109339 addons.go:239] Setting addon volcano=true in "addons-993117"
	I1101 08:56:13.604119  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.604592  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.604654  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.600522  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.605727  109339 addons.go:70] Setting storage-provisioner=true in profile "addons-993117"
	I1101 08:56:13.605787  109339 addons.go:239] Setting addon storage-provisioner=true in "addons-993117"
	I1101 08:56:13.605838  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.606079  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.606787  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.607159  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.607297  109339 out.go:179] * Verifying Kubernetes components...
	I1101 08:56:13.609067  109339 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 08:56:13.614389  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.614417  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.615008  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.630184  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.661338  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1101 08:56:13.663197  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1101 08:56:13.663288  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1101 08:56:13.668491  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1101 08:56:13.668583  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:56:13.670085  109339 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1101 08:56:13.671151  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:56:13.671315  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1101 08:56:13.671331  109339 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1101 08:56:13.671407  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.671677  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1101 08:56:13.672569  109339 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:56:13.672588  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1101 08:56:13.672642  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.673509  109339 addons.go:239] Setting addon default-storageclass=true in "addons-993117"
	I1101 08:56:13.673602  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.674291  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.674387  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1101 08:56:13.675431  109339 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1101 08:56:13.676388  109339 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:56:13.676407  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1101 08:56:13.676480  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.678032  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1101 08:56:13.679325  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1101 08:56:13.680928  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1101 08:56:13.682177  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1101 08:56:13.682199  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1101 08:56:13.682344  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.682568  109339 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1101 08:56:13.683369  109339 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1101 08:56:13.684777  109339 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:56:13.684797  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1101 08:56:13.684861  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.687355  109339 out.go:179]   - Using image docker.io/registry:3.0.0
	I1101 08:56:13.687791  109339 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1101 08:56:13.688518  109339 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1101 08:56:13.688539  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1101 08:56:13.688622  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.689185  109339 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1101 08:56:13.689203  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1101 08:56:13.689271  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.698012  109339 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1101 08:56:13.701538  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1101 08:56:13.701568  109339 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1101 08:56:13.701647  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	W1101 08:56:13.703899  109339 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1101 08:56:13.707178  109339 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1101 08:56:13.707185  109339 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1101 08:56:13.708960  109339 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1101 08:56:13.708985  109339 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1101 08:56:13.709079  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.709900  109339 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:56:13.710005  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1101 08:56:13.710082  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.715957  109339 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1101 08:56:13.717373  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1101 08:56:13.717414  109339 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1101 08:56:13.717484  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.729311  109339 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1101 08:56:13.731532  109339 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-993117"
	I1101 08:56:13.731590  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.731758  109339 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:56:13.731775  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1101 08:56:13.731839  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.732123  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:13.736073  109339 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 08:56:13.738376  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.738672  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:13.739798  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.743495  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.743557  109339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:56:13.743573  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 08:56:13.743635  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.744938  109339 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
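The bash pipeline above is how minikube injects the host.minikube.internal record into CoreDNS: it dumps the coredns ConfigMap, splices a hosts block in front of the existing "forward . /etc/resolv.conf" line (plus a log directive ahead of errors) with sed, and pushes the result back with kubectl replace. The spliced-in stanza, verbatim from the sed expression, is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

The "host record injected into CoreDNS's ConfigMap" line at 08:56:14.111123 below confirms the replace went through.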
	I1101 08:56:13.754783  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.770265  109339 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 08:56:13.770293  109339 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 08:56:13.770350  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.773371  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.774071  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.775096  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.784370  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.792429  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.796002  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.805959  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.809493  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	W1101 08:56:13.809888  109339 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:56:13.810342  109339 retry.go:31] will retry after 234.782741ms: ssh: handshake failed: EOF
	I1101 08:56:13.810887  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.811558  109339 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W1101 08:56:13.812069  109339 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:56:13.812128  109339 retry.go:31] will retry after 225.126126ms: ssh: handshake failed: EOF
	I1101 08:56:13.812702  109339 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1101 08:56:13.814017  109339 out.go:179]   - Using image docker.io/busybox:stable
	I1101 08:56:13.815290  109339 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:56:13.815313  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1101 08:56:13.815373  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:13.820748  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	W1101 08:56:13.821758  109339 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1101 08:56:13.821784  109339 retry.go:31] will retry after 178.188905ms: ssh: handshake failed: EOF
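The ssh: handshake failed: EOF warnings above are transient: a dozen addon installers dial sshd in the addons-993117 container at nearly the same moment, and sshutil.go backs off for a couple hundred milliseconds and redials instead of failing the whole apply. A minimal Go sketch of that shape, with a hypothetical dialWithRetry helper (not minikube's actual sshutil API), assuming short jittered waits like the 178-235ms ones logged here:

    package main

    import (
        "fmt"
        "math/rand"
        "net"
        "time"
    )

    // dialWithRetry redials a TCP endpoint a few times with short, jittered
    // waits, mirroring the "will retry after 234.782741ms" behavior above.
    func dialWithRetry(addr string, attempts int) (net.Conn, error) {
        var lastErr error
        for i := 0; i < attempts; i++ {
            conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
            if err == nil {
                return conn, nil
            }
            lastErr = err
            wait := time.Duration(100+rand.Intn(200)) * time.Millisecond
            fmt.Printf("will retry after %v: %v\n", wait, err)
            time.Sleep(wait)
        }
        return nil, lastErr
    }

    func main() {
        // 127.0.0.1:32768 is the forwarded SSH port from the log.
        if conn, err := dialWithRetry("127.0.0.1:32768", 3); err == nil {
            conn.Close()
        }
    }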
	I1101 08:56:13.849547  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:13.909435  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1101 08:56:13.909464  109339 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1101 08:56:13.919257  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1101 08:56:13.922656  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1101 08:56:13.926192  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1101 08:56:13.928818  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1101 08:56:13.928836  109339 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1101 08:56:13.930297  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1101 08:56:13.930316  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1101 08:56:13.941188  109339 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1101 08:56:13.941225  109339 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1101 08:56:13.952645  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1101 08:56:13.952673  109339 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1101 08:56:13.962322  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1101 08:56:13.962346  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1101 08:56:13.962538  109339 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:13.962551  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1101 08:56:13.967825  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1101 08:56:13.968968  109339 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1101 08:56:13.969033  109339 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1101 08:56:13.973025  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1101 08:56:13.973813  109339 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:56:13.973862  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1101 08:56:13.984654  109339 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:56:13.984681  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1101 08:56:13.991422  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1101 08:56:13.991513  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1101 08:56:14.002557  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:14.003169  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1101 08:56:14.007789  109339 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1101 08:56:14.007821  109339 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1101 08:56:14.012318  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1101 08:56:14.015942  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1101 08:56:14.015969  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1101 08:56:14.021516  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1101 08:56:14.034314  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1101 08:56:14.034351  109339 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1101 08:56:14.052112  109339 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1101 08:56:14.052144  109339 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1101 08:56:14.076115  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1101 08:56:14.076158  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1101 08:56:14.111123  109339 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1101 08:56:14.113069  109339 node_ready.go:35] waiting up to 6m0s for node "addons-993117" to be "Ready" ...
	I1101 08:56:14.115133  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1101 08:56:14.115156  109339 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1101 08:56:14.120067  109339 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:56:14.120094  109339 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1101 08:56:14.159307  109339 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1101 08:56:14.159353  109339 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1101 08:56:14.176642  109339 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:56:14.176668  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1101 08:56:14.195853  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1101 08:56:14.234076  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1101 08:56:14.234125  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1101 08:56:14.247219  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1101 08:56:14.252931  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 08:56:14.262765  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 08:56:14.281960  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1101 08:56:14.282007  109339 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1101 08:56:14.336374  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1101 08:56:14.383112  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1101 08:56:14.383145  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1101 08:56:14.430576  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1101 08:56:14.430606  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1101 08:56:14.489804  109339 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:56:14.489856  109339 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1101 08:56:14.544887  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1101 08:56:14.652125  109339 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-993117" context rescaled to 1 replicas
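The rescale at kapi.go:214 trims the coredns Deployment from its stock two replicas down to one, which is sufficient on a single-node cluster like this one; the equivalent by hand would be kubectl -n kube-system scale deployment coredns --replicas=1.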
	I1101 08:56:15.235797  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.262732661s)
	I1101 08:56:15.235856  109339 addons.go:480] Verifying addon ingress=true in "addons-993117"
	I1101 08:56:15.236030  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.232775822s)
	I1101 08:56:15.235969  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.233311002s)
	W1101 08:56:15.236085  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:15.236093  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.22373794s)
	I1101 08:56:15.236110  109339 addons.go:480] Verifying addon registry=true in "addons-993117"
	I1101 08:56:15.236109  109339 retry.go:31] will retry after 143.298607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
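This validation failure is not transient. The ig-crd.yaml copied to the node earlier in this run was only 14 bytes (see the scp at 08:56:13.708985), so kubectl is validating an effectively empty manifest that carries neither an apiVersion nor a kind, while the 15034-byte ig-deployment.yaml in the same apply goes through fine (the gadget namespace and daemonset are created). Every retry below trips over the same truncated file and fails identically; a retry loop cannot repair a bad manifest.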
	I1101 08:56:15.236205  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.214647969s)
	I1101 08:56:15.236337  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.040437417s)
	I1101 08:56:15.236357  109339 addons.go:480] Verifying addon metrics-server=true in "addons-993117"
	I1101 08:56:15.237852  109339 out.go:179] * Verifying ingress addon...
	I1101 08:56:15.238871  109339 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-993117 service yakd-dashboard -n yakd-dashboard
	
	I1101 08:56:15.238880  109339 out.go:179] * Verifying registry addon...
	I1101 08:56:15.240868  109339 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1101 08:56:15.242165  109339 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1101 08:56:15.244605  109339 kapi.go:86] Found 2 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:56:15.244624  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:15.245001  109339 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:56:15.245017  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
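The kapi.go:96 lines that dominate the rest of this log are a poll loop: list the pods behind a label selector, log their phase, and go around again until everything is Running. A minimal client-go sketch of that wait, using the on-node kubeconfig path from the log (illustration only, not minikube's kapi implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabel polls pods matching selector in ns until all are Running,
    // printing the same kind of "waiting for pod ... current state" lines.
    func waitForLabel(client *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pods, err := client.CoreV1().Pods(ns).List(context.TODO(),
                metav1.ListOptions{LabelSelector: selector})
            if err == nil && len(pods.Items) > 0 {
                running := true
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        running = false
                        break
                    }
                }
                if running {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("pods %q not ready after %v", selector, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _ = waitForLabel(client, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
    }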
	I1101 08:56:15.380298  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:15.651574  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.404299736s)
	W1101 08:56:15.651633  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1101 08:56:15.651661  109339 retry.go:31] will retry after 240.796509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
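Unlike the ig-crd failure, this one is an ordering race, and the stderr says so: the snapshot.storage.k8s.io CRDs and a VolumeSnapshotClass that instantiates them were submitted in a single apply, and the API server cannot map the new kind until the freshly created CRDs are established. The forced re-apply kicked off at 08:56:15.893272 completes cleanly at 08:56:18.379531 once discovery has caught up. One common way to close this window explicitly is kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io between the two applies.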
	I1101 08:56:15.651759  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.398804313s)
	I1101 08:56:15.651808  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.389018672s)
	I1101 08:56:15.652133  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.315715303s)
	I1101 08:56:15.652435  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.107500219s)
	I1101 08:56:15.652497  109339 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-993117"
	I1101 08:56:15.655158  109339 out.go:179] * Verifying csi-hostpath-driver addon...
	I1101 08:56:15.657731  109339 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1101 08:56:15.660841  109339 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:56:15.660864  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:15.769537  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:15.769754  109339 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1101 08:56:15.769771  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:15.893272  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1101 08:56:16.036218  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:16.036262  109339 retry.go:31] will retry after 392.94291ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:56:16.116423  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:16.161067  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:16.244072  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:16.245598  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:16.430047  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:16.661436  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:16.761860  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:16.762105  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:17.161873  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:17.244472  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:17.244590  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:17.661899  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:17.744611  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:17.744859  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1101 08:56:18.116614  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:18.160407  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:18.244107  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:18.244752  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:18.379531  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.486200378s)
	I1101 08:56:18.379592  109339 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.949517274s)
	W1101 08:56:18.379616  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:18.379633  109339 retry.go:31] will retry after 794.996455ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:18.661087  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:18.762458  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:18.762532  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:19.161797  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:19.174798  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:19.244599  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:19.245408  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:19.661474  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:56:19.744035  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:19.744074  109339 retry.go:31] will retry after 1.121376386s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:19.744955  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:19.745331  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:20.161142  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:20.244123  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:20.244717  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:56:20.616480  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:20.662385  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:20.744266  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:20.744711  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:20.865754  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:21.161147  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:21.244730  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:21.245054  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:21.355138  109339 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1101 08:56:21.355205  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:21.376790  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	W1101 08:56:21.424718  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:21.424756  109339 retry.go:31] will retry after 1.247725993s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:21.485051  109339 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1101 08:56:21.497581  109339 addons.go:239] Setting addon gcp-auth=true in "addons-993117"
	I1101 08:56:21.497664  109339 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:56:21.498087  109339 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:56:21.516567  109339 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1101 08:56:21.516613  109339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:56:21.534285  109339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:56:21.632486  109339 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1101 08:56:21.634119  109339 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1101 08:56:21.635395  109339 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1101 08:56:21.635416  109339 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1101 08:56:21.649489  109339 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1101 08:56:21.649514  109339 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1101 08:56:21.661828  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:21.663054  109339 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:56:21.663073  109339 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1101 08:56:21.676527  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1101 08:56:21.744054  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:21.745588  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:21.987363  109339 addons.go:480] Verifying addon gcp-auth=true in "addons-993117"
	I1101 08:56:21.988833  109339 out.go:179] * Verifying gcp-auth addon...
	I1101 08:56:21.990814  109339 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1101 08:56:21.993110  109339 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1101 08:56:21.993125  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:22.160971  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:22.244078  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:22.245555  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:22.494484  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:22.661310  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:22.673391  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:22.744331  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:22.745844  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:22.994423  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:23.116269  109339 node_ready.go:57] node "addons-993117" has "Ready":"False" status (will retry)
	I1101 08:56:23.161260  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:56:23.216011  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:23.216047  109339 retry.go:31] will retry after 1.037960417s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:23.244060  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:23.244383  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:23.494202  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:23.661574  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:23.744660  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:23.744664  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:23.994635  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:24.161126  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:24.243894  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:24.245439  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:24.254628  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:24.494292  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:24.661495  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:24.744001  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:24.744990  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1101 08:56:24.797863  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:24.797896  109339 retry.go:31] will retry after 3.053906263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:24.995401  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:25.116797  109339 node_ready.go:49] node "addons-993117" is "Ready"
	I1101 08:56:25.116865  109339 node_ready.go:38] duration metric: took 11.003760435s for node "addons-993117" to be "Ready" ...
	I1101 08:56:25.116887  109339 api_server.go:52] waiting for apiserver process to appear ...
	I1101 08:56:25.116977  109339 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 08:56:25.134829  109339 api_server.go:72] duration metric: took 11.535579277s to wait for apiserver process to appear ...
	I1101 08:56:25.134860  109339 api_server.go:88] waiting for apiserver healthz status ...
	I1101 08:56:25.134885  109339 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1101 08:56:25.139615  109339 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1101 08:56:25.140670  109339 api_server.go:141] control plane version: v1.34.1
	I1101 08:56:25.140715  109339 api_server.go:131] duration metric: took 5.847732ms to wait for apiserver health ...
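The healthz wait above is a plain HTTPS poll of the API server until /healthz answers 200 with body "ok". A stripped-down Go sketch of that check against the endpoint from the log; skipping certificate verification is a simplification here (minikube authenticates with the cluster's client certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // pollHealthz hits the apiserver's /healthz until it returns 200 "ok".
    func pollHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }

    func main() {
        if err := pollHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }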
	I1101 08:56:25.140724  109339 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 08:56:25.146244  109339 system_pods.go:59] 20 kube-system pods found
	I1101 08:56:25.146284  109339 system_pods.go:61] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending
	I1101 08:56:25.146298  109339 system_pods.go:61] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.146305  109339 system_pods.go:61] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending
	I1101 08:56:25.146313  109339 system_pods.go:61] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending
	I1101 08:56:25.146318  109339 system_pods.go:61] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending
	I1101 08:56:25.146323  109339 system_pods.go:61] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.146329  109339 system_pods.go:61] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.146335  109339 system_pods.go:61] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.146345  109339 system_pods.go:61] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.146353  109339 system_pods.go:61] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.146363  109339 system_pods.go:61] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.146368  109339 system_pods.go:61] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.146376  109339 system_pods.go:61] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.146382  109339 system_pods.go:61] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending
	I1101 08:56:25.146390  109339 system_pods.go:61] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.146398  109339 system_pods.go:61] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.146406  109339 system_pods.go:61] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.146415  109339 system_pods.go:61] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.146421  109339 system_pods.go:61] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending
	I1101 08:56:25.146427  109339 system_pods.go:61] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending
	I1101 08:56:25.146435  109339 system_pods.go:74] duration metric: took 5.703263ms to wait for pod list to return data ...
	I1101 08:56:25.146451  109339 default_sa.go:34] waiting for default service account to be created ...
	I1101 08:56:25.151348  109339 default_sa.go:45] found service account: "default"
	I1101 08:56:25.151379  109339 default_sa.go:55] duration metric: took 4.921573ms for default service account to be created ...
	I1101 08:56:25.151392  109339 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 08:56:25.166874  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:25.166940  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending
	I1101 08:56:25.166955  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.166963  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending
	I1101 08:56:25.166974  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:25.166980  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending
	I1101 08:56:25.166986  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.166994  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.167001  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.167007  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.167018  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.167024  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.167031  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.167039  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.167044  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending
	I1101 08:56:25.167053  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.167060  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.167068  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.167077  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.167084  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending
	I1101 08:56:25.167090  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending
	I1101 08:56:25.167111  109339 retry.go:31] will retry after 311.66806ms: missing components: kube-dns
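
The missing component named in the retry above is CoreDNS, which carries the conventional kube-dns label in kube-system; a sketch for watching it converge by hand (label selector assumed to be the standard one):

	kubectl -n kube-system get pods -l k8s-app=kube-dns -w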
	I1101 08:56:25.173869  109339 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1101 08:56:25.173898  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:25.244720  109339 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1101 08:56:25.244744  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:25.245000  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:25.483437  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:25.483477  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:56:25.483515  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.483525  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:56:25.483533  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:25.483546  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:56:25.483553  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.483559  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.483564  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.483620  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.483629  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.483634  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.483640  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.483647  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.483660  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:56:25.483669  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.483682  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.483694  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.483702  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.483710  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.483717  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:56:25.483737  109339 retry.go:31] will retry after 244.489184ms: missing components: kube-dns
	I1101 08:56:25.582581  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:25.683882  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:25.734094  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:25.734138  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:56:25.734151  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 08:56:25.734161  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:56:25.734169  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:25.734179  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:56:25.734190  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:25.734198  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:25.734204  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:25.734209  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:25.734217  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:25.734227  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:25.734233  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:25.734241  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:25.734249  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:56:25.734260  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:25.734268  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:25.734280  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:25.734288  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.734298  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:25.734306  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 08:56:25.734329  109339 retry.go:31] will retry after 378.600191ms: missing components: kube-dns
	I1101 08:56:25.744493  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:25.744714  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:25.996391  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:26.119109  109339 system_pods.go:86] 20 kube-system pods found
	I1101 08:56:26.119162  109339 system_pods.go:89] "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1101 08:56:26.119178  109339 system_pods.go:89] "coredns-66bc5c9577-fpzpv" [90913b1b-6b7d-428f-b9e4-faeddafa95ca] Running
	I1101 08:56:26.119195  109339 system_pods.go:89] "csi-hostpath-attacher-0" [8eb86797-b6e6-477f-b198-4ffc2834d53b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1101 08:56:26.119210  109339 system_pods.go:89] "csi-hostpath-resizer-0" [f538b791-0db5-404c-bc3e-d9793e0ad79e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1101 08:56:26.119221  109339 system_pods.go:89] "csi-hostpathplugin-vpnz6" [faf5fdcb-9600-4496-8fab-723b26e72a4d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1101 08:56:26.119229  109339 system_pods.go:89] "etcd-addons-993117" [01769101-ae6c-4278-ba0d-dd10ee066307] Running
	I1101 08:56:26.119236  109339 system_pods.go:89] "kindnet-5ln5h" [91f034ba-31e4-4857-8376-38426a1783ae] Running
	I1101 08:56:26.119247  109339 system_pods.go:89] "kube-apiserver-addons-993117" [bfe58862-7a79-43ca-ad37-eb331735f258] Running
	I1101 08:56:26.119259  109339 system_pods.go:89] "kube-controller-manager-addons-993117" [5fa456c5-43c8-4897-bac5-1f06c09d0242] Running
	I1101 08:56:26.119274  109339 system_pods.go:89] "kube-ingress-dns-minikube" [70b84fac-f831-40ae-aed1-ed0c6577288e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1101 08:56:26.119284  109339 system_pods.go:89] "kube-proxy-z7fst" [6e767c33-b0f8-43e9-b1bd-e57a53fd4781] Running
	I1101 08:56:26.119291  109339 system_pods.go:89] "kube-scheduler-addons-993117" [4c005b14-66e3-4940-8ed0-ee9f7ea81299] Running
	I1101 08:56:26.119305  109339 system_pods.go:89] "metrics-server-85b7d694d7-xfvx6" [e043da64-ca2f-49e1-8af9-25be09cdb56b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1101 08:56:26.119318  109339 system_pods.go:89] "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1101 08:56:26.119330  109339 system_pods.go:89] "registry-6b586f9694-785wk" [48d54e24-0425-4f8e-b67b-dc0f16dbcccc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1101 08:56:26.119338  109339 system_pods.go:89] "registry-creds-764b6fb674-9xsjx" [0ba7767f-afca-4206-9242-b5defbf3f5ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1101 08:56:26.119349  109339 system_pods.go:89] "registry-proxy-497v5" [3193b72b-c812-4490-b737-26cd9e00a032] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1101 08:56:26.119362  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-sms8j" [cd1ab6d6-eb23-4cd0-ab4f-8c86f831ce4c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:26.119372  109339 system_pods.go:89] "snapshot-controller-7d9fbc56b8-zl99q" [21db5fa9-5be8-4bb0-851f-267ac47683d4] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1101 08:56:26.119380  109339 system_pods.go:89] "storage-provisioner" [f680fb14-9342-4545-bcb0-8b8195aa7950] Running
	I1101 08:56:26.119392  109339 system_pods.go:126] duration metric: took 967.992599ms to wait for k8s-apps to be running ...
	I1101 08:56:26.119406  109339 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 08:56:26.119468  109339 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 08:56:26.137503  109339 system_svc.go:56] duration metric: took 18.086345ms WaitForService to wait for kubelet
	I1101 08:56:26.137534  109339 kubeadm.go:587] duration metric: took 12.538288508s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 08:56:26.137558  109339 node_conditions.go:102] verifying NodePressure condition ...
	I1101 08:56:26.141070  109339 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 08:56:26.141104  109339 node_conditions.go:123] node cpu capacity is 8
	I1101 08:56:26.141119  109339 node_conditions.go:105] duration metric: took 3.554596ms to run NodePressure ...
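
The capacity figures in the NodePressure check above (304681132Ki ephemeral storage, 8 CPUs) come straight off the node object and can be read back with jsonpath; an illustrative command, not part of the test harness:

	kubectl get node addons-993117 \
	  -o jsonpath='{.status.capacity.cpu}{" cpu, "}{.status.capacity.ephemeral-storage}{" ephemeral-storage"}{"\n"}'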
	I1101 08:56:26.141136  109339 start.go:242] waiting for startup goroutines ...
	I1101 08:56:26.161136  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:26.244109  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:26.245573  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:26.495240  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:26.661934  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:26.762615  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:26.762582  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:26.995236  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:27.161803  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:27.244862  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:27.245280  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:27.493837  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:27.661942  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:27.745119  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:27.745250  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:27.852520  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:27.993962  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:28.161744  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:28.246717  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:28.246762  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:28.493816  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:28.529383  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:28.529423  109339 retry.go:31] will retry after 4.93840652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:28.662588  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:28.763557  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:28.763554  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:28.995438  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:29.162118  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:29.247384  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:29.247450  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:29.494932  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:29.661681  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:29.745145  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:29.745345  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:29.994268  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:30.161943  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:30.244940  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:30.245373  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:30.494515  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:30.662824  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:30.764453  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:30.764570  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:30.994286  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:31.161709  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:31.244317  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:31.244948  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:31.493816  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:31.661481  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:31.744330  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:31.744719  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:31.993895  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:32.161657  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:32.244932  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:32.245032  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:32.494412  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:32.662213  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:32.745489  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:32.745540  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:32.994812  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:33.161355  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:33.244044  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:33.244661  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:33.468028  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:33.494787  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:33.662566  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:33.744663  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:33.745032  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:33.994530  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:34.008282  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:34.008314  109339 retry.go:31] will retry after 7.842026789s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:34.161976  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:34.245652  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:34.246039  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:34.495512  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:34.662500  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:34.745375  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:34.746283  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:34.994220  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:35.162056  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:35.244890  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:35.246782  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:35.496528  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:35.786070  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:35.786091  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:35.786383  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:35.994248  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:36.162016  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:36.245229  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:36.245384  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:36.494536  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:36.661717  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:36.744767  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:36.744982  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:36.994601  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:37.161960  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:37.244814  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:37.245177  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:37.494406  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:37.678943  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:37.745852  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:37.746083  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:37.994335  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:38.161472  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:38.244423  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:38.244978  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:38.494077  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:38.661328  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:38.743940  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:38.745884  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:38.994306  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:39.161603  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:39.244321  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:39.245009  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:39.494302  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:39.662045  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:39.745364  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:39.745366  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:39.994945  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:40.161533  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:40.245212  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:40.245447  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:40.494389  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:40.661523  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:40.744478  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:40.745007  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:40.994291  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:41.161314  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:41.243997  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:41.244524  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:41.494491  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:41.662207  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:41.744504  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:41.745553  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:41.850904  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:41.995662  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:42.162732  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:42.244350  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:42.245247  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:42.493752  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:42.661743  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1101 08:56:42.699170  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:42.699211  109339 retry.go:31] will retry after 11.303479007s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
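
Across the four failed applies the retry delays grow roughly geometrically (3.05s, 4.94s, 7.84s, 11.30s, about 1.5-1.6x per attempt). A minimal shell sketch of that kind of backoff schedule, purely illustrative and not minikube's actual retry.go:

	delay=3
	for attempt in 1 2 3 4; do
	  echo "attempt $attempt failed; sleeping ${delay}s before retry"
	  sleep "$delay"
	  delay=$(( delay * 8 / 5 ))   # grow ~1.6x per attempt: 3 -> 4 -> 6 -> 9
	done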
	I1101 08:56:42.744298  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:42.744628  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:42.994705  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:43.162046  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:43.244609  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:43.245168  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:43.576328  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:43.737011  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:43.778756  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:43.779028  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:43.993656  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:44.161991  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:44.245639  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:44.245961  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:44.494498  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:44.661782  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:44.744425  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:44.745867  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:44.994289  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:45.161690  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:45.245006  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:45.245149  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:45.493877  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:45.661339  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:45.744702  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:45.745120  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:45.994674  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:46.162521  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:46.244853  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:46.245012  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:46.497344  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:46.661382  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:46.762368  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:46.762543  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:46.994723  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:47.161439  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:47.261448  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:47.261448  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:47.494650  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:47.662088  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:47.745026  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:47.745230  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:47.994546  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:48.160972  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:48.244787  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:48.245512  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:48.494366  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:48.661765  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:48.744630  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:48.745005  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:48.993998  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:49.160770  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:49.244350  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:49.245218  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:49.493754  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:49.660860  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:49.744625  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:49.745163  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:49.994730  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:50.163052  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:50.244881  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:50.245027  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:50.493896  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:50.662134  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:50.744251  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:50.745980  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:50.994853  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:51.161509  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:51.244639  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:51.244943  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:51.494949  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:51.661333  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:51.747512  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:51.747543  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:51.994905  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:52.160881  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:52.244878  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:52.245193  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:52.493888  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:52.661379  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:52.744098  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:52.745168  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:52.994305  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:53.163706  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:53.244559  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:53.245254  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:53.493972  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:53.661060  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:53.745022  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:53.745067  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:53.994444  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:54.003675  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1101 08:56:54.161736  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:54.244484  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:54.245027  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:54.494275  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1101 08:56:54.551407  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:56:54.551444  109339 retry.go:31] will retry after 17.625597397s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
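
The validation failure above is kubectl's client-side schema check: every document in an applied manifest must carry both apiVersion and kind (TypeMeta), so the ig-crd.yaml shipped in this run evidently contains a document missing them. A minimal, self-contained Go sketch of that check — the manifest contents below are illustrative, not the actual ig-crd.yaml:

package main

import (
	"fmt"
	"io"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields kubectl requires on every document.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	manifest := `apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: example.gadget.example.io
---
metadata:
  name: missing-type-meta   # this document would fail validation
`
	dec := yaml.NewDecoder(strings.NewReader(manifest))
	for i := 0; ; i++ {
		var tm typeMeta
		err := dec.Decode(&tm)
		if err == io.EOF {
			break
		}
		if err != nil {
			fmt.Fprintln(os.Stderr, "decode error:", err)
			os.Exit(1)
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// kubectl reports this as:
			// error validating data: [apiVersion not set, kind not set]
			fmt.Printf("document %d: apiVersion/kind not set\n", i)
		}
	}
}
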
	I1101 08:56:54.661525  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:54.744571  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:54.744984  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:54.994034  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:55.162205  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:55.245360  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:55.245635  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:55.495093  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:55.661540  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:55.744671  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:55.745099  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1101 08:56:55.994762  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:56.161820  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:56.244739  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:56.245185  109339 kapi.go:107] duration metric: took 41.003020729s to wait for kubernetes.io/minikube-addons=registry ...
	I1101 08:56:56.494624  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:56.661785  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:56.744855  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:56.993850  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:57.162856  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:57.244578  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:57.495362  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:57.662064  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:57.744859  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:57.994460  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:58.161402  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:58.244553  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:58.494725  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:58.661737  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:58.825352  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:58.994598  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:59.162176  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:59.244549  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:59.494091  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:56:59.662978  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:56:59.746502  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:56:59.999964  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:00.164525  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:00.244939  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:00.493763  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:00.661866  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:00.745118  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:00.993821  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:01.161121  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:01.275323  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:01.494355  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:01.664078  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:01.745650  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:01.995427  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:02.162488  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:02.244497  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:02.494993  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:02.661490  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:02.744429  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:02.994908  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:03.161519  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:03.264630  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:03.494836  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:03.661394  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:03.744683  109339 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1101 08:57:03.994775  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:04.160795  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:04.244685  109339 kapi.go:107] duration metric: took 49.003814841s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1101 08:57:04.494534  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:04.723627  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:04.994136  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:05.161339  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:05.493796  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:05.661295  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:05.994519  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:06.162349  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:06.494270  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:06.662185  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:06.995073  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1101 08:57:07.161980  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:07.494319  109339 kapi.go:107] duration metric: took 45.503501642s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1101 08:57:07.495602  109339 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-993117 cluster.
	I1101 08:57:07.497432  109339 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1101 08:57:07.498683  109339 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1101 08:57:07.663225  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:08.161563  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:08.661165  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:09.162317  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:09.661410  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:10.161291  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:10.661317  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:11.160989  109339 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1101 08:57:11.662456  109339 kapi.go:107] duration metric: took 56.0047251s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
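
The kapi.go:96 lines above are a fixed-interval poll over a pod label selector, and kapi.go:107 closes each wait with the total duration. A schematic stdlib-only sketch of that loop, assuming the ~500ms interval suggested by the timestamps; listState is a hypothetical stand-in for the real Kubernetes API call, which this sketch does not make:

package main

import (
	"fmt"
	"time"
)

// waitForSelector polls the aggregate state of pods matching selector
// until they are Running or the timeout elapses, logging each poll in
// the same shape as kapi.go:96/107 above.
func waitForSelector(selector string, timeout time.Duration, listState func() string) error {
	start := time.Now()
	for time.Since(start) < timeout {
		if state := listState(); state == "Running" {
			fmt.Printf("duration metric: took %s to wait for %s ...\n", time.Since(start), selector)
			return nil
		} else {
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, state)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}

func main() {
	// Simulate a pod that becomes Running after three polls.
	polls := 0
	_ = waitForSelector("kubernetes.io/minikube-addons=csi-hostpath-driver", time.Minute, func() string {
		polls++
		if polls < 4 {
			return "Pending"
		}
		return "Running"
	})
}
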
	I1101 08:57:12.177739  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:57:12.734441  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:57:12.734477  109339 retry.go:31] will retry after 16.494145924s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:57:29.230132  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:57:29.769200  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:57:29.769234  109339 retry.go:31] will retry after 41.872417481s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1101 08:58:11.644068  109339 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1101 08:58:12.189889  109339 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1101 08:58:12.190041  109339 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
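
The retry delays logged above (17.6s, 16.5s, 41.9s) come from retry.go's randomized backoff; once the attempts are exhausted, the error is surfaced as the out.go:285 warning rather than failing the start outright. A minimal sketch of that pattern, with illustrative attempt count and jitter bounds that are assumptions, not minikube's actual parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply re-runs a failing step with a randomized delay between
// attempts, in the style of the retry.go:31 lines above.
func retryApply(attempts int, apply func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := time.Duration(10+rand.Intn(35)) * time.Second
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	// After the final attempt the caller reports the error as a warning,
	// as in "Enabling 'inspektor-gadget' returned an error" above.
	return err
}

func main() {
	err := retryApply(4, func() error {
		return errors.New("apiVersion not set, kind not set")
	})
	fmt.Println("final:", err)
}
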
	I1101 08:58:12.192346  109339 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, cloud-spanner, registry-creds, metrics-server, yakd, storage-provisioner-rancher, storage-provisioner, ingress-dns, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1101 08:58:12.193974  109339 addons.go:515] duration metric: took 1m58.594666203s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin cloud-spanner registry-creds metrics-server yakd storage-provisioner-rancher storage-provisioner ingress-dns default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1101 08:58:12.194027  109339 start.go:247] waiting for cluster config update ...
	I1101 08:58:12.194057  109339 start.go:256] writing updated cluster config ...
	I1101 08:58:12.194343  109339 ssh_runner.go:195] Run: rm -f paused
	I1101 08:58:12.198555  109339 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:58:12.202749  109339 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fpzpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.207288  109339 pod_ready.go:94] pod "coredns-66bc5c9577-fpzpv" is "Ready"
	I1101 08:58:12.207312  109339 pod_ready.go:86] duration metric: took 4.537945ms for pod "coredns-66bc5c9577-fpzpv" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.209703  109339 pod_ready.go:83] waiting for pod "etcd-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.213776  109339 pod_ready.go:94] pod "etcd-addons-993117" is "Ready"
	I1101 08:58:12.213797  109339 pod_ready.go:86] duration metric: took 4.074176ms for pod "etcd-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.215542  109339 pod_ready.go:83] waiting for pod "kube-apiserver-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.219406  109339 pod_ready.go:94] pod "kube-apiserver-addons-993117" is "Ready"
	I1101 08:58:12.219429  109339 pod_ready.go:86] duration metric: took 3.866311ms for pod "kube-apiserver-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.221333  109339 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.603283  109339 pod_ready.go:94] pod "kube-controller-manager-addons-993117" is "Ready"
	I1101 08:58:12.603324  109339 pod_ready.go:86] duration metric: took 381.969936ms for pod "kube-controller-manager-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:12.802306  109339 pod_ready.go:83] waiting for pod "kube-proxy-z7fst" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.202602  109339 pod_ready.go:94] pod "kube-proxy-z7fst" is "Ready"
	I1101 08:58:13.202630  109339 pod_ready.go:86] duration metric: took 400.299281ms for pod "kube-proxy-z7fst" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.403073  109339 pod_ready.go:83] waiting for pod "kube-scheduler-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.803533  109339 pod_ready.go:94] pod "kube-scheduler-addons-993117" is "Ready"
	I1101 08:58:13.803571  109339 pod_ready.go:86] duration metric: took 400.467584ms for pod "kube-scheduler-addons-993117" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 08:58:13.803586  109339 pod_ready.go:40] duration metric: took 1.604993955s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 08:58:13.850600  109339 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 08:58:13.852415  109339 out.go:179] * Done! kubectl is now configured to use "addons-993117" cluster and "default" namespace by default
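
The "minor skew" note at start.go:628 above compares the local kubectl minor version with the cluster's Kubernetes minor version; a skew of 0 means no compatibility warning is printed. A minimal sketch of that comparison — the parsing below is an assumption, not minikube's actual code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor version
// components of two "major.minor.patch" version strings.
func minorSkew(client, cluster string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}
	d := minor(client) - minor(cluster)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Printf("kubectl: 1.34.1, cluster: 1.34.1 (minor skew: %d)\n",
		minorSkew("1.34.1", "1.34.1"))
}
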
	
	
	==> CRI-O <==
	Nov 01 08:58:07 addons-993117 crio[769]: time="2025-11-01T08:58:07.943587323Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 08:58:07 addons-993117 crio[769]: time="2025-11-01T08:58:07.943676738Z" level=info msg="Removed pod sandbox: 2a33ea18b524ed3a59aafc0b121f2c564a4b11b33c313ca217cc561ae6a126d1" id=ede20475-aad4-44f3-a267-6a7215eb88c7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.706295876Z" level=info msg="Running pod sandbox: default/busybox/POD" id=851cdbc2-8d4f-49a9-a9ab-65bb88157e4e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.706400474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.713601117Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:98b064e51932258b1dc2faaa4111db008a00f8d06505a6a41fc3370b344fe39d UID:a981116f-e99d-4594-8675-f889dd0ec9e5 NetNS:/var/run/netns/7a989b8c-ab35-4c06-be02-711f2a3bb5b6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e02658}] Aliases:map[]}"
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.713630385Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.724027157Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:98b064e51932258b1dc2faaa4111db008a00f8d06505a6a41fc3370b344fe39d UID:a981116f-e99d-4594-8675-f889dd0ec9e5 NetNS:/var/run/netns/7a989b8c-ab35-4c06-be02-711f2a3bb5b6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e02658}] Aliases:map[]}"
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.724158989Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.725024385Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.725808293Z" level=info msg="Ran pod sandbox 98b064e51932258b1dc2faaa4111db008a00f8d06505a6a41fc3370b344fe39d with infra container: default/busybox/POD" id=851cdbc2-8d4f-49a9-a9ab-65bb88157e4e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.727043466Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8cddddc-f8bd-4c78-af2b-026a473015f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.727185832Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f8cddddc-f8bd-4c78-af2b-026a473015f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.727231807Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f8cddddc-f8bd-4c78-af2b-026a473015f2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.727824741Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e8e9246a-e543-4188-a711-f3a4a266680b name=/runtime.v1.ImageService/PullImage
	Nov 01 08:58:14 addons-993117 crio[769]: time="2025-11-01T08:58:14.729583394Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.918349635Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=e8e9246a-e543-4188-a711-f3a4a266680b name=/runtime.v1.ImageService/PullImage
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.91906043Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99593ab6-0621-4dd3-9494-acd6d8ff295c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.920480933Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f2f6386a-0f28-4ed9-a6fc-095125e464d3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.924025218Z" level=info msg="Creating container: default/busybox/busybox" id=6da77600-84a6-4f00-bf14-3decaf3062ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.924157667Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.930161173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.9306269Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.963725485Z" level=info msg="Created container 5d68fdf48825c3c2d237b649175a7663efd383a535a242c0d0341e3465622a4a: default/busybox/busybox" id=6da77600-84a6-4f00-bf14-3decaf3062ea name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.964341504Z" level=info msg="Starting container: 5d68fdf48825c3c2d237b649175a7663efd383a535a242c0d0341e3465622a4a" id=d21fcad9-7f0e-44cc-abee-fd2b9d6dc173 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 08:58:16 addons-993117 crio[769]: time="2025-11-01T08:58:16.966422331Z" level=info msg="Started container" PID=6768 containerID=5d68fdf48825c3c2d237b649175a7663efd383a535a242c0d0341e3465622a4a description=default/busybox/busybox id=d21fcad9-7f0e-44cc-abee-fd2b9d6dc173 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98b064e51932258b1dc2faaa4111db008a00f8d06505a6a41fc3370b344fe39d
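
The CRI-O entries above trace the standard CRI call sequence for bringing up the busybox pod: RunPodSandbox (with the kindnet CNI attach), an ImageStatus miss, PullImage resolving the tag to the digest, then CreateContainer and StartContainer. A sketch of that sequence as a CRI gRPC client, with errors elided and the socket path assumed:

package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, _ := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	img := runtimeapi.NewImageServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: CRI-O sets up the sandbox and attaches it to the
	//    kindnet CNI network (the "Got pod network" lines above).
	sb, _ := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{Name: "busybox", Namespace: "default"},
		},
	})

	// 2. ImageStatus: a miss ("Image ... not found") triggers a pull.
	ref := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}
	st, _ := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: ref})
	if st.GetImage() == nil {
		// 3. PullImage: resolves the tag to the digest logged above.
		img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: ref})
	}

	// 4/5. CreateContainer, then StartContainer inside the sandbox.
	c, _ := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.GetPodSandboxId(),
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "busybox"},
			Image:    ref,
		},
	})
	rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.GetContainerId()})
}
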
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	5d68fdf48825c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   98b064e519322       busybox                                     default
	3b35b0d070189       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          About a minute ago   Running             csi-snapshotter                          0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	8dc1437b90151       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          About a minute ago   Running             csi-provisioner                          0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	804b66311e935       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            About a minute ago   Running             liveness-probe                           0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	2821b9f559e62       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           About a minute ago   Running             hostpath                                 0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	8b1063e261681       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 About a minute ago   Running             gcp-auth                                 0                   19a2d27ca5086       gcp-auth-78565c9fb4-g7cqf                   gcp-auth
	b08eff5e2d492       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                About a minute ago   Running             node-driver-registrar                    0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	571bf53478339       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             About a minute ago   Running             controller                               0                   cc20cb30fbcef       ingress-nginx-controller-675c5ddd98-8fg7m   ingress-nginx
	2e7ae1f1e2452       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            About a minute ago   Running             gadget                                   0                   d8c3f9ab14021       gadget-92zrk                                gadget
	b6a9d7748ccc5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              About a minute ago   Running             registry-proxy                           0                   3460a4aad78f4       registry-proxy-497v5                        kube-system
	751b58c8fd0aa       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago   Running             amd-gpu-device-plugin                    0                   e3a1cf7764ab6       amd-gpu-device-plugin-ldw4v                 kube-system
	cf726f61ce62e       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     About a minute ago   Running             nvidia-device-plugin-ctr                 0                   f342fa87f62b7       nvidia-device-plugin-daemonset-hqm9x        kube-system
	cfc14b381b0aa       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   50a1c70c688c9       csi-hostpathplugin-vpnz6                    kube-system
	10ebfd823db73       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   e7e2357557149       snapshot-controller-7d9fbc56b8-zl99q        kube-system
	7933addcfb16f       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   d7b05be0b0bc1       snapshot-controller-7d9fbc56b8-sms8j        kube-system
	1fb99f095c842       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   0fbf167f5bbdf       csi-hostpath-attacher-0                     kube-system
	847964df5e7f5       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   de40dd2b545a2       csi-hostpath-resizer-0                      kube-system
	9ccdb57d51398       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   16a7229b5dd32       ingress-nginx-admission-patch-t2bh9         ingress-nginx
	a6545c336c6d0       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   a585e7a3cc894       yakd-dashboard-5ff678cb9-b4vdn              yakd-dashboard
	b9a7d4e2630d2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   fcf4a1a0bae6f       ingress-nginx-admission-create-p6ghj        ingress-nginx
	31237bb5ad80b       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   de5c323f8e9d4       local-path-provisioner-648f6765c9-cszjc     local-path-storage
	903d4bbf18d4c       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   b936419bd7cb6       kube-ingress-dns-minikube                   kube-system
	a0559bd812da6       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   83fa7d0923f27       registry-6b586f9694-785wk                   kube-system
	32c59991365e7       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   53765ec0d3bdd       cloud-spanner-emulator-86bd5cbb97-gbmhj     default
	0d2603d622294       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   6f50ba1952109       metrics-server-85b7d694d7-xfvx6             kube-system
	d1be24b1775c4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             2 minutes ago        Running             storage-provisioner                      0                   a3338291f8bbc       storage-provisioner                         kube-system
	3bd1589cbc2c1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             2 minutes ago        Running             coredns                                  0                   817c2c2ee349a       coredns-66bc5c9577-fpzpv                    kube-system
	4d446343c7b2f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   75245beea1e10       kindnet-5ln5h                               kube-system
	9838dcae88ecb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   fa5bb0d34c3fc       kube-proxy-z7fst                            kube-system
	4ff46c8fd9e89       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   97649c15ee686       kube-controller-manager-addons-993117       kube-system
	780e64dcae645       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   db84185288f9b       etcd-addons-993117                          kube-system
	1c79567a55106       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   8e5bafffc8916       kube-apiserver-addons-993117                kube-system
	cc887abb01e9d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   e6e2340283b81       kube-scheduler-addons-993117                kube-system
	
	
	==> coredns [3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede] <==
	[INFO] 10.244.0.16:46742 - 16500 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003581059s
	[INFO] 10.244.0.16:42156 - 33260 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000078286s
	[INFO] 10.244.0.16:42156 - 32854 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000074873s
	[INFO] 10.244.0.16:33785 - 41355 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000066791s
	[INFO] 10.244.0.16:33785 - 41084 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000110334s
	[INFO] 10.244.0.16:54339 - 58877 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000063639s
	[INFO] 10.244.0.16:54339 - 59101 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000095953s
	[INFO] 10.244.0.16:42425 - 33935 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000112653s
	[INFO] 10.244.0.16:42425 - 33749 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000154918s
	[INFO] 10.244.0.22:46586 - 12625 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026106s
	[INFO] 10.244.0.22:35395 - 54309 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000193797s
	[INFO] 10.244.0.22:48351 - 40534 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112209s
	[INFO] 10.244.0.22:35111 - 22564 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000638815s
	[INFO] 10.244.0.22:53052 - 17541 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132658s
	[INFO] 10.244.0.22:54870 - 37555 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129156s
	[INFO] 10.244.0.22:38631 - 29760 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003455769s
	[INFO] 10.244.0.22:44459 - 52608 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003519001s
	[INFO] 10.244.0.22:51975 - 31969 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004685374s
	[INFO] 10.244.0.22:33467 - 57724 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005185573s
	[INFO] 10.244.0.22:43418 - 23601 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.00542252s
	[INFO] 10.244.0.22:33019 - 52769 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006848902s
	[INFO] 10.244.0.22:39233 - 39848 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004791417s
	[INFO] 10.244.0.22:51901 - 47745 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004915754s
	[INFO] 10.244.0.22:53901 - 721 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001074142s
	[INFO] 10.244.0.22:48056 - 11565 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001161465s
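
The NXDOMAIN-then-NOERROR pattern above is resolv.conf search-path expansion: with the pod's ndots:5 setting, storage.googleapis.com (two dots) is first tried against every search suffix before being queried as-is. A small sketch reproducing the query order seen in this log; the suffix list is reconstructed from the queries above:

package main

import (
	"fmt"
	"strings"
)

// expand returns the query order a resolver with the given ndots and
// search list would try for name: suffixed forms first, bare name last.
func expand(name string, ndots int, search []string) []string {
	var tries []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			tries = append(tries, name+"."+s)
		}
	}
	return append(tries, name)
}

func main() {
	search := []string{
		"gcp-auth.svc.cluster.local", "svc.cluster.local", "cluster.local",
		"local", "europe-west2-a.c.k8s-minikube.internal",
		"c.k8s-minikube.internal", "google.internal",
	}
	for _, q := range expand("storage.googleapis.com", 5, search) {
		fmt.Println(q) // every form but the last answered NXDOMAIN above
	}
}
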
	
	
	==> describe nodes <==
	Name:               addons-993117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-993117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=addons-993117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T08_56_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-993117
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-993117"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 08:56:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-993117
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 08:58:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 08:58:10 +0000   Sat, 01 Nov 2025 08:56:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 08:58:10 +0000   Sat, 01 Nov 2025 08:56:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 08:58:10 +0000   Sat, 01 Nov 2025 08:56:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 08:58:10 +0000   Sat, 01 Nov 2025 08:56:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-993117
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7df020e8-7b12-4d73-ac54-ad61f7ee33f3
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-86bd5cbb97-gbmhj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  gadget                      gadget-92zrk                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  gcp-auth                    gcp-auth-78565c9fb4-g7cqf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-8fg7m    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m11s
	  kube-system                 amd-gpu-device-plugin-ldw4v                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 coredns-66bc5c9577-fpzpv                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m13s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 csi-hostpathplugin-vpnz6                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 etcd-addons-993117                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m18s
	  kube-system                 kindnet-5ln5h                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m13s
	  kube-system                 kube-apiserver-addons-993117                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-controller-manager-addons-993117        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-z7fst                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-scheduler-addons-993117                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 metrics-server-85b7d694d7-xfvx6              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m12s
	  kube-system                 nvidia-device-plugin-daemonset-hqm9x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-6b586f9694-785wk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 registry-creds-764b6fb674-9xsjx              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 registry-proxy-497v5                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-sms8j         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 snapshot-controller-7d9fbc56b8-zl99q         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m11s
	  local-path-storage          local-path-provisioner-648f6765c9-cszjc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-b4vdn               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 2m11s  kube-proxy       
	  Normal  Starting                 2m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m18s  kubelet          Node addons-993117 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m18s  kubelet          Node addons-993117 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m18s  kubelet          Node addons-993117 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m14s  node-controller  Node addons-993117 event: Registered Node addons-993117 in Controller
	  Normal  NodeReady                2m2s   kubelet          Node addons-993117 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 fe c2 bd be b4 08 06
	[Nov 1 08:41] IPv4: martian source 10.244.0.1 from 10.244.0.35, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff be 00 2a 05 3c 0c 08 06
	[ +26.487894] IPv4: martian source 10.244.0.1 from 10.244.0.39, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 92 96 06 fd 62 1d 08 06
	[Nov 1 08:43] IPv4: martian source 10.244.0.1 from 10.244.0.45, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 97 f5 39 58 ed 08 06
	[Nov 1 08:44] IPv4: martian source 10.244.0.1 from 10.244.0.46, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 0a 09 0e 06 07 38 08 06
	[Nov 1 08:45] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff f6 12 4c 5b 0b 63 08 06
	[  +0.000011] IPv4: martian source 10.244.0.1 from 10.244.0.48, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff da 70 26 3d 3a b7 08 06
	[ +25.943756] IPv4: martian source 10.244.0.1 from 10.244.0.49, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 0e ab 7e ea ad d6 08 06
	[Nov 1 08:46] IPv4: martian source 10.244.0.1 from 10.244.0.50, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 9e 08 5c 51 d7 d0 08 06
	[Nov 1 08:47] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 2a d0 d6 6a b5 8a 08 06
	[ +15.876054] IPv4: martian source 10.244.0.1 from 10.244.0.52, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ba 99 54 24 38 74 08 06
	[Nov 1 08:48] IPv4: martian source 10.244.0.1 from 10.244.0.53, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 53 1e 0b f5 f9 08 06
	[ +20.616610] IPv4: martian source 10.244.0.1 from 10.244.0.54, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 5d 8b 4b c3 ca 08 06
	
	
	==> etcd [780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404] <==
	{"level":"warn","ts":"2025-11-01T08:56:04.822048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.828125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.834535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.840564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.846648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.852973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.859188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.865660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.872688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.878870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.885246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.892073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.912752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.919068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.925533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:04.975766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:16.121119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:16.127489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44098","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:35.783444Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.248766ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T08:56:35.783665Z","caller":"traceutil/trace.go:172","msg":"trace[833785441] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:972; }","duration":"123.473715ms","start":"2025-11-01T08:56:35.660166Z","end":"2025-11-01T08:56:35.783640Z","steps":["trace[833785441] 'range keys from in-memory index tree'  (duration: 123.168305ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T08:56:42.379829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:42.397128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:42.421946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T08:56:42.430932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53746","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T08:56:58.823865Z","caller":"traceutil/trace.go:172","msg":"trace[1608055675] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"142.82337ms","start":"2025-11-01T08:56:58.681023Z","end":"2025-11-01T08:56:58.823847Z","steps":["trace[1608055675] 'process raft request'  (duration: 142.65258ms)"],"step_count":1}
	
	
	==> gcp-auth [8b1063e261681593de2333b65c9abd4b740fd7fe445a8fc5c87d459bf5213f20] <==
	2025/11/01 08:57:06 GCP Auth Webhook started!
	2025/11/01 08:58:14 Ready to marshal response ...
	2025/11/01 08:58:14 Ready to write response ...
	2025/11/01 08:58:14 Ready to marshal response ...
	2025/11/01 08:58:14 Ready to write response ...
	2025/11/01 08:58:14 Ready to marshal response ...
	2025/11/01 08:58:14 Ready to write response ...
	
	
	==> kernel <==
	 08:58:26 up 40 min,  0 user,  load average: 0.68, 0.89, 0.71
	Linux addons-993117 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352] <==
	I1101 08:56:24.832309       1 main.go:301] handling current node
	I1101 08:56:34.832810       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:56:34.832871       1 main.go:301] handling current node
	I1101 08:56:44.832539       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:56:44.832571       1 main.go:301] handling current node
	I1101 08:56:54.832858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:56:54.832895       1 main.go:301] handling current node
	I1101 08:57:04.832562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:57:04.832594       1 main.go:301] handling current node
	I1101 08:57:14.832999       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:57:14.833035       1 main.go:301] handling current node
	I1101 08:57:24.832962       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:57:24.833001       1 main.go:301] handling current node
	I1101 08:57:34.832789       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:57:34.832824       1 main.go:301] handling current node
	I1101 08:57:44.833134       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:57:44.833166       1 main.go:301] handling current node
	I1101 08:57:54.833000       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:57:54.833035       1 main.go:301] handling current node
	I1101 08:58:04.832795       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:58:04.832827       1 main.go:301] handling current node
	I1101 08:58:14.833199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:58:14.833230       1 main.go:301] handling current node
	I1101 08:58:24.832796       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 08:58:24.832840       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286] <==
	I1101 08:56:21.930234       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.187.34"}
	W1101 08:56:24.961947       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.187.34:443: connect: connection refused
	E1101 08:56:24.961998       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:24.963478       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.187.34:443: connect: connection refused
	E1101 08:56:24.963639       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:24.988853       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.187.34:443: connect: connection refused
	E1101 08:56:24.988892       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:24.989697       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.187.34:443: connect: connection refused
	E1101 08:56:24.989731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.187.34:443: connect: connection refused" logger="UnhandledError"
	E1101 08:56:28.032729       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	W1101 08:56:28.032984       1 handler_proxy.go:99] no RequestInfo found in the context
	E1101 08:56:28.033081       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1101 08:56:28.033358       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	E1101 08:56:28.039345       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	E1101 08:56:28.061105       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.118.136:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.118.136:443: connect: connection refused" logger="UnhandledError"
	I1101 08:56:28.132690       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1101 08:56:42.379698       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:56:42.392871       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:56:42.421960       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1101 08:56:42.430991       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1101 08:58:24.549658       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37440: use of closed network connection
	E1101 08:58:24.707835       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:37448: use of closed network connection
	
	
	==> kube-controller-manager [4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719] <==
	I1101 08:56:12.350723       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 08:56:12.350742       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 08:56:12.350765       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 08:56:12.350765       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-993117"
	I1101 08:56:12.350812       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 08:56:12.351038       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 08:56:12.351038       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 08:56:12.351271       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 08:56:12.351282       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 08:56:12.352169       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 08:56:12.352189       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 08:56:12.352243       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 08:56:12.352421       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 08:56:12.352763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 08:56:12.353986       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:56:12.355068       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 08:56:12.362777       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 08:56:12.370218       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 08:56:14.898750       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1101 08:56:27.355843       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1101 08:56:42.361711       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1101 08:56:42.361789       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1101 08:56:42.387832       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1101 08:56:42.462474       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 08:56:42.488855       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf] <==
	I1101 08:56:14.283036       1 server_linux.go:53] "Using iptables proxy"
	I1101 08:56:14.620464       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 08:56:14.735655       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 08:56:14.737539       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 08:56:14.738988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 08:56:14.875285       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 08:56:14.875349       1 server_linux.go:132] "Using iptables Proxier"
	I1101 08:56:14.891261       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 08:56:14.892760       1 server.go:527] "Version info" version="v1.34.1"
	I1101 08:56:14.892832       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 08:56:14.895277       1 config.go:200] "Starting service config controller"
	I1101 08:56:14.895353       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 08:56:14.895536       1 config.go:106] "Starting endpoint slice config controller"
	I1101 08:56:14.895591       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 08:56:14.896319       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 08:56:14.896335       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 08:56:14.904797       1 config.go:309] "Starting node config controller"
	I1101 08:56:14.904851       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 08:56:14.904860       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 08:56:15.010700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 08:56:15.010804       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 08:56:15.011155       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252] <==
	E1101 08:56:05.375271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:56:05.375319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:56:05.375339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:56:05.375371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 08:56:05.375393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:56:05.375415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:56:05.375559       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 08:56:05.375692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:56:05.375716       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:56:05.375731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 08:56:05.376222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 08:56:05.376287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:56:05.376314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 08:56:06.204430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 08:56:06.250731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 08:56:06.310281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 08:56:06.351304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 08:56:06.423221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 08:56:06.476648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 08:56:06.487743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 08:56:06.553818       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 08:56:06.588199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 08:56:06.601405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 08:56:06.822731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 08:56:08.974238       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 08:56:54 addons-993117 kubelet[1272]: I1101 08:56:54.125312    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ldw4v" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:56:54 addons-993117 kubelet[1272]: I1101 08:56:54.137247    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-ldw4v" podStartSLOduration=2.123559987 podStartE2EDuration="30.13722451s" podCreationTimestamp="2025-11-01 08:56:24 +0000 UTC" firstStartedPulling="2025-11-01 08:56:25.428876532 +0000 UTC m=+17.591670616" lastFinishedPulling="2025-11-01 08:56:53.442541065 +0000 UTC m=+45.605335139" observedRunningTime="2025-11-01 08:56:54.135823677 +0000 UTC m=+46.298617770" watchObservedRunningTime="2025-11-01 08:56:54.13722451 +0000 UTC m=+46.300018602"
	Nov 01 08:56:55 addons-993117 kubelet[1272]: I1101 08:56:55.128555    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ldw4v" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:56:56 addons-993117 kubelet[1272]: I1101 08:56:56.133824    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-497v5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:56:56 addons-993117 kubelet[1272]: I1101 08:56:56.144188    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-497v5" podStartSLOduration=1.5282509819999999 podStartE2EDuration="32.144170727s" podCreationTimestamp="2025-11-01 08:56:24 +0000 UTC" firstStartedPulling="2025-11-01 08:56:25.434794294 +0000 UTC m=+17.597588377" lastFinishedPulling="2025-11-01 08:56:56.050714038 +0000 UTC m=+48.213508122" observedRunningTime="2025-11-01 08:56:56.143988052 +0000 UTC m=+48.306782142" watchObservedRunningTime="2025-11-01 08:56:56.144170727 +0000 UTC m=+48.306964816"
	Nov 01 08:56:56 addons-993117 kubelet[1272]: E1101 08:56:56.828509    1272 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 08:56:56 addons-993117 kubelet[1272]: E1101 08:56:56.828589    1272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba7767f-afca-4206-9242-b5defbf3f5ee-gcr-creds podName:0ba7767f-afca-4206-9242-b5defbf3f5ee nodeName:}" failed. No retries permitted until 2025-11-01 08:57:28.828576117 +0000 UTC m=+80.991370187 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0ba7767f-afca-4206-9242-b5defbf3f5ee-gcr-creds") pod "registry-creds-764b6fb674-9xsjx" (UID: "0ba7767f-afca-4206-9242-b5defbf3f5ee") : secret "registry-creds-gcr" not found
	Nov 01 08:56:57 addons-993117 kubelet[1272]: I1101 08:56:57.137385    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-497v5" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:57:00 addons-993117 kubelet[1272]: I1101 08:57:00.176263    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-92zrk" podStartSLOduration=17.997876258 podStartE2EDuration="46.176240772s" podCreationTimestamp="2025-11-01 08:56:14 +0000 UTC" firstStartedPulling="2025-11-01 08:56:31.141764223 +0000 UTC m=+23.304558293" lastFinishedPulling="2025-11-01 08:56:59.320128733 +0000 UTC m=+51.482922807" observedRunningTime="2025-11-01 08:57:00.174881586 +0000 UTC m=+52.337675677" watchObservedRunningTime="2025-11-01 08:57:00.176240772 +0000 UTC m=+52.339034862"
	Nov 01 08:57:04 addons-993117 kubelet[1272]: I1101 08:57:04.180508    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-8fg7m" podStartSLOduration=27.003719278 podStartE2EDuration="49.180485969s" podCreationTimestamp="2025-11-01 08:56:15 +0000 UTC" firstStartedPulling="2025-11-01 08:56:40.956348985 +0000 UTC m=+33.119143066" lastFinishedPulling="2025-11-01 08:57:03.133115683 +0000 UTC m=+55.295909757" observedRunningTime="2025-11-01 08:57:04.179405824 +0000 UTC m=+56.342199925" watchObservedRunningTime="2025-11-01 08:57:04.180485969 +0000 UTC m=+56.343280060"
	Nov 01 08:57:07 addons-993117 kubelet[1272]: I1101 08:57:07.196506    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-g7cqf" podStartSLOduration=36.834574404 podStartE2EDuration="46.196482124s" podCreationTimestamp="2025-11-01 08:56:21 +0000 UTC" firstStartedPulling="2025-11-01 08:56:57.294941738 +0000 UTC m=+49.457735810" lastFinishedPulling="2025-11-01 08:57:06.656849456 +0000 UTC m=+58.819643530" observedRunningTime="2025-11-01 08:57:07.194535745 +0000 UTC m=+59.357329835" watchObservedRunningTime="2025-11-01 08:57:07.196482124 +0000 UTC m=+59.359276216"
	Nov 01 08:57:07 addons-993117 kubelet[1272]: I1101 08:57:07.966314    1272 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 01 08:57:07 addons-993117 kubelet[1272]: I1101 08:57:07.966365    1272 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 01 08:57:11 addons-993117 kubelet[1272]: I1101 08:57:11.226314    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-vpnz6" podStartSLOduration=2.172744818 podStartE2EDuration="47.226291758s" podCreationTimestamp="2025-11-01 08:56:24 +0000 UTC" firstStartedPulling="2025-11-01 08:56:25.425337426 +0000 UTC m=+17.588131516" lastFinishedPulling="2025-11-01 08:57:10.478884382 +0000 UTC m=+62.641678456" observedRunningTime="2025-11-01 08:57:11.224323444 +0000 UTC m=+63.387117550" watchObservedRunningTime="2025-11-01 08:57:11.226291758 +0000 UTC m=+63.389085850"
	Nov 01 08:57:19 addons-993117 kubelet[1272]: I1101 08:57:19.924443    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149954a9-29b6-4840-bb1b-f7163525254f" path="/var/lib/kubelet/pods/149954a9-29b6-4840-bb1b-f7163525254f/volumes"
	Nov 01 08:57:21 addons-993117 kubelet[1272]: I1101 08:57:21.924533    1272 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ae25cd9-280c-4202-865f-ff778a48fece" path="/var/lib/kubelet/pods/3ae25cd9-280c-4202-865f-ff778a48fece/volumes"
	Nov 01 08:57:28 addons-993117 kubelet[1272]: E1101 08:57:28.884250    1272 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 01 08:57:28 addons-993117 kubelet[1272]: E1101 08:57:28.884364    1272 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ba7767f-afca-4206-9242-b5defbf3f5ee-gcr-creds podName:0ba7767f-afca-4206-9242-b5defbf3f5ee nodeName:}" failed. No retries permitted until 2025-11-01 08:58:32.884348809 +0000 UTC m=+145.047142891 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/0ba7767f-afca-4206-9242-b5defbf3f5ee-gcr-creds") pod "registry-creds-764b6fb674-9xsjx" (UID: "0ba7767f-afca-4206-9242-b5defbf3f5ee") : secret "registry-creds-gcr" not found
	Nov 01 08:58:00 addons-993117 kubelet[1272]: I1101 08:58:00.921120    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hqm9x" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:58:07 addons-993117 kubelet[1272]: I1101 08:58:07.917898    1272 scope.go:117] "RemoveContainer" containerID="afcde6dfcd98dabd7f9d74985df25c04171ec86f4f225fe72646c0914a4a52e3"
	Nov 01 08:58:07 addons-993117 kubelet[1272]: I1101 08:58:07.926603    1272 scope.go:117] "RemoveContainer" containerID="395298bfc0fdfb19abf8176e2eaf4347f065724864738400c53a5d9cb0c68037"
	Nov 01 08:58:09 addons-993117 kubelet[1272]: I1101 08:58:09.922029    1272 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-ldw4v" secret="" err="secret \"gcp-auth\" not found"
	Nov 01 08:58:14 addons-993117 kubelet[1272]: I1101 08:58:14.549652    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a981116f-e99d-4594-8675-f889dd0ec9e5-gcp-creds\") pod \"busybox\" (UID: \"a981116f-e99d-4594-8675-f889dd0ec9e5\") " pod="default/busybox"
	Nov 01 08:58:14 addons-993117 kubelet[1272]: I1101 08:58:14.549728    1272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktchp\" (UniqueName: \"kubernetes.io/projected/a981116f-e99d-4594-8675-f889dd0ec9e5-kube-api-access-ktchp\") pod \"busybox\" (UID: \"a981116f-e99d-4594-8675-f889dd0ec9e5\") " pod="default/busybox"
	Nov 01 08:58:17 addons-993117 kubelet[1272]: I1101 08:58:17.468682    1272 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.2763169890000001 podStartE2EDuration="3.468657853s" podCreationTimestamp="2025-11-01 08:58:14 +0000 UTC" firstStartedPulling="2025-11-01 08:58:14.727463797 +0000 UTC m=+126.890257867" lastFinishedPulling="2025-11-01 08:58:16.919804657 +0000 UTC m=+129.082598731" observedRunningTime="2025-11-01 08:58:17.468166756 +0000 UTC m=+129.630960848" watchObservedRunningTime="2025-11-01 08:58:17.468657853 +0000 UTC m=+129.631451944"
	
	
	==> storage-provisioner [d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818] <==
	W1101 08:58:02.059302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:04.062323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:04.067636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:06.071104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:06.075028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:08.078173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:08.081983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:10.084944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:10.089809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:12.093229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:12.097749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:14.100946       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:14.105080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:16.108281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:16.112499       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:18.115671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:18.120473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:20.123788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:20.127877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:22.131444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:22.135086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:24.137938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:24.142829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:26.147479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 08:58:26.152242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-993117 -n addons-993117
helpers_test.go:269: (dbg) Run:  kubectl --context addons-993117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: task-pv-pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9 registry-creds-764b6fb674-9xsjx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-993117 describe pod task-pv-pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9 registry-creds-764b6fb674-9xsjx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-993117 describe pod task-pv-pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9 registry-creds-764b6fb674-9xsjx: exit status 1 (69.861342ms)

-- stdout --
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-993117/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 08:58:27 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP (http-server)
	    Host Port:      0/TCP (http-server)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7wtbk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-7wtbk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  0s    default-scheduler  Successfully assigned default/task-pv-pod to addons-993117

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-p6ghj" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-t2bh9" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-9xsjx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-993117 describe pod task-pv-pod ingress-nginx-admission-create-p6ghj ingress-nginx-admission-patch-t2bh9 registry-creds-764b6fb674-9xsjx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable headlamp --alsologtostderr -v=1: exit status 11 (261.371672ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1101 08:58:27.468024  118994 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:27.468365  118994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:27.468380  118994 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:27.468388  118994 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:27.468664  118994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:27.469064  118994 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:27.469567  118994 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:27.469597  118994 addons.go:607] checking whether the cluster is paused
	I1101 08:58:27.469740  118994 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:27.469765  118994 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:27.470427  118994 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:27.488797  118994 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:27.488883  118994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:27.507606  118994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:27.608807  118994 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:27.608877  118994 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:27.641045  118994 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:27.641079  118994 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:27.641085  118994 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:27.641090  118994 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:27.641093  118994 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:27.641099  118994 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:27.641102  118994 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:27.641106  118994 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:27.641109  118994 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:27.641124  118994 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:27.641129  118994 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:27.641134  118994 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:27.641138  118994 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:27.641143  118994 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:27.641148  118994 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:27.641159  118994 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:27.641167  118994 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:27.641173  118994 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:27.641177  118994 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:27.641180  118994 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:27.641184  118994 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:27.641188  118994 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:27.641193  118994 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:27.641211  118994 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:27.641215  118994 cri.go:89] found id: ""
	I1101 08:58:27.641285  118994 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:27.657371  118994 out.go:203] 
	W1101 08:58:27.658809  118994 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:27.658826  118994 out.go:285] * 
	* 
	W1101 08:58:27.662039  118994 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:27.663461  118994 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.70s)
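Note: every "addons disable" failure in this run shares a single root cause. Before disabling an addon, minikube checks whether the cluster is paused: it lists the kube-system containers through crictl (the "found id:" lines above succeed), then runs "sudo runc list -f json", which exits 1 because runc's default state directory /run/runc does not exist on this crio node. The cluster is not actually paused; only the runc-based check fails. A hedged triage sketch (the commands are standard minikube/crictl/runc invocations; where CRI-O actually keeps its runtime state on this image is an assumption to verify, not something the log shows):

	# Reproduce the failing check; runc defaults to --root /run/runc:
	minikube -p addons-993117 ssh -- sudo runc list -f json
	# Confirm the default state directory is absent:
	minikube -p addons-993117 ssh -- ls -ld /run/runc
	# Inspect which low-level runtime and root CRI-O is configured with:
	minikube -p addons-993117 ssh -- sudo crictl info | grep -i -A3 runtime
	# The containers themselves are running and visible through the CRI,
	# matching the "found id:" lines above:
	minikube -p addons-993117 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

The same paused-state check fails identically in each of the addon tests that follow.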

                                                
                                    
TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-gbmhj" [2aa83dc4-7284-491f-88b9-72b76f1c2a28] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003017572s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (255.156423ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:44.256240  120983 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:44.256536  120983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:44.256548  120983 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:44.256552  120983 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:44.256797  120983 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:44.257141  120983 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:44.257500  120983 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:44.257518  120983 addons.go:607] checking whether the cluster is paused
	I1101 08:58:44.257671  120983 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:44.257693  120983 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:44.258242  120983 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:44.277149  120983 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:44.277205  120983 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:44.296305  120983 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:44.397024  120983 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:44.397161  120983 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:44.427003  120983 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:44.427024  120983 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:44.427034  120983 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:44.427039  120983 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:44.427042  120983 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:44.427046  120983 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:44.427049  120983 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:44.427053  120983 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:44.427057  120983 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:44.427064  120983 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:44.427068  120983 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:44.427072  120983 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:44.427077  120983 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:44.427081  120983 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:44.427086  120983 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:44.427101  120983 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:44.427108  120983 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:44.427113  120983 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:44.427117  120983 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:44.427120  120983 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:44.427125  120983 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:44.427139  120983 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:44.427147  120983 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:44.427152  120983 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:44.427159  120983 cri.go:89] found id: ""
	I1101 08:58:44.427205  120983 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:44.442129  120983 out.go:203] 
	W1101 08:58:44.443493  120983 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:44.443512  120983 out.go:285] * 
	* 
	W1101 08:58:44.446625  120983 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:44.447817  120983 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/LocalPath (13.18s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-993117 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-993117 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [cd5600c5-f1ab-4a58-9b3c-63b65adcc0d5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [cd5600c5-f1ab-4a58-9b3c-63b65adcc0d5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [cd5600c5-f1ab-4a58-9b3c-63b65adcc0d5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.004072134s
addons_test.go:967: (dbg) Run:  kubectl --context addons-993117 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 ssh "cat /opt/local-path-provisioner/pvc-0365a22a-6c12-401f-8fad-405ba975828f_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-993117 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-993117 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (259.401547ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:48.596097  121319 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:48.596432  121319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:48.596443  121319 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:48.596450  121319 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:48.596664  121319 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:48.596990  121319 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:48.597349  121319 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:48.597367  121319 addons.go:607] checking whether the cluster is paused
	I1101 08:58:48.597483  121319 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:48.597509  121319 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:48.598000  121319 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:48.617740  121319 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:48.617801  121319 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:48.637708  121319 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:48.738728  121319 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:48.738804  121319 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:48.769004  121319 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:48.769043  121319 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:48.769048  121319 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:48.769054  121319 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:48.769059  121319 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:48.769064  121319 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:48.769068  121319 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:48.769073  121319 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:48.769077  121319 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:48.769091  121319 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:48.769095  121319 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:48.769099  121319 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:48.769103  121319 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:48.769107  121319 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:48.769111  121319 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:48.769131  121319 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:48.769140  121319 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:48.769146  121319 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:48.769150  121319 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:48.769154  121319 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:48.769158  121319 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:48.769162  121319 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:48.769166  121319 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:48.769170  121319 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:48.769174  121319 cri.go:89] found id: ""
	I1101 08:58:48.769232  121319 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:48.785586  121319 out.go:203] 
	W1101 08:58:48.787203  121319 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:48.787230  121319 out.go:285] * 
	* 
	W1101 08:58:48.791794  121319 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:48.793119  121319 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (13.18s)

TestAddons/parallel/NvidiaDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hqm9x" [15bd754a-567b-486e-b302-958c6c35e01b] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003187468s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (262.417309ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:38.982903  119755 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:38.983037  119755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:38.983042  119755 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:38.983046  119755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:38.983238  119755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:38.983481  119755 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:38.983841  119755 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:38.983858  119755 addons.go:607] checking whether the cluster is paused
	I1101 08:58:38.983954  119755 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:38.983972  119755 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:38.984324  119755 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:39.002666  119755 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:39.002730  119755 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:39.021990  119755 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:39.129848  119755 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:39.130014  119755 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:39.160699  119755 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:39.160721  119755 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:39.160726  119755 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:39.160732  119755 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:39.160736  119755 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:39.160741  119755 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:39.160746  119755 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:39.160750  119755 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:39.160753  119755 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:39.160761  119755 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:39.160765  119755 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:39.160769  119755 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:39.160775  119755 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:39.160783  119755 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:39.160787  119755 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:39.160796  119755 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:39.160803  119755 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:39.160809  119755 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:39.160812  119755 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:39.160816  119755 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:39.160823  119755 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:39.160829  119755 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:39.160832  119755 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:39.160834  119755 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:39.160840  119755 cri.go:89] found id: ""
	I1101 08:58:39.160901  119755 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:39.175019  119755 out.go:203] 
	W1101 08:58:39.176691  119755 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:39.176716  119755 out.go:285] * 
	* 
	W1101 08:58:39.180394  119755 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:39.182200  119755 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

TestAddons/parallel/Yakd (5.27s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-b4vdn" [c41076fa-8e84-4c53-a1a4-785c1dbf00e0] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003840146s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable yakd --alsologtostderr -v=1: exit status 11 (260.746608ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:35.416052  119453 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:35.416348  119453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:35.416361  119453 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:35.416368  119453 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:35.416675  119453 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:35.417037  119453 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:35.417510  119453 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:35.417528  119453 addons.go:607] checking whether the cluster is paused
	I1101 08:58:35.417659  119453 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:35.417683  119453 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:35.418204  119453 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:35.439076  119453 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:35.439343  119453 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:35.458059  119453 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:35.559458  119453 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:35.559540  119453 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:35.590669  119453 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:35.590700  119453 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:35.590706  119453 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:35.590711  119453 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:35.590715  119453 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:35.590721  119453 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:35.590726  119453 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:35.590731  119453 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:35.590735  119453 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:35.590753  119453 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:35.590758  119453 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:35.590761  119453 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:35.590763  119453 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:35.590766  119453 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:35.590768  119453 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:35.590781  119453 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:35.590788  119453 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:35.590794  119453 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:35.590798  119453 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:35.590802  119453 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:35.590806  119453 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:35.590814  119453 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:35.590818  119453 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:35.590825  119453 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:35.590835  119453 cri.go:89] found id: ""
	I1101 08:58:35.590892  119453 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:35.605361  119453 out.go:203] 
	W1101 08:58:35.606826  119453 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:35.606853  119453 out.go:285] * 
	* 
	W1101 08:58:35.610278  119453 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:35.611739  119453 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.27s)

TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-ldw4v" [d8470b34-a718-4170-8f5a-08c89ef719f6] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003845017s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-993117 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-993117 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (248.489393ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1101 08:58:33.729315  119369 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:58:33.729645  119369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:33.729654  119369 out.go:374] Setting ErrFile to fd 2...
	I1101 08:58:33.729658  119369 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:58:33.729867  119369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:58:33.730131  119369 mustload.go:66] Loading cluster: addons-993117
	I1101 08:58:33.730502  119369 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:33.730517  119369 addons.go:607] checking whether the cluster is paused
	I1101 08:58:33.730605  119369 config.go:182] Loaded profile config "addons-993117": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 08:58:33.730621  119369 host.go:66] Checking if "addons-993117" exists ...
	I1101 08:58:33.731027  119369 cli_runner.go:164] Run: docker container inspect addons-993117 --format={{.State.Status}}
	I1101 08:58:33.750595  119369 ssh_runner.go:195] Run: systemctl --version
	I1101 08:58:33.750652  119369 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-993117
	I1101 08:58:33.768568  119369 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/addons-993117/id_rsa Username:docker}
	I1101 08:58:33.867492  119369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 08:58:33.867590  119369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 08:58:33.896268  119369 cri.go:89] found id: "3b35b0d0701895a16385542e077578124e24f94b0a6e170eac032648e4c1b5ba"
	I1101 08:58:33.896295  119369 cri.go:89] found id: "8dc1437b901512d56f5baf4f5cee036100eb92285e3162ebad53278d26004219"
	I1101 08:58:33.896300  119369 cri.go:89] found id: "804b66311e9351cc2a8c06a9cfcc32abaac4497c2f242f720ad611f046dcca48"
	I1101 08:58:33.896305  119369 cri.go:89] found id: "2821b9f559e62bbb8fd89bd7dbaa789180010e3b031cc06c7f03f6f083d1108a"
	I1101 08:58:33.896310  119369 cri.go:89] found id: "b08eff5e2d492769ecddbebce4a670ede044973b12374e58f10410b67c46d459"
	I1101 08:58:33.896315  119369 cri.go:89] found id: "b6a9d7748ccc57555b8b8fbf5a5501c707b6b91e7d0bb503bade14421a2d185b"
	I1101 08:58:33.896320  119369 cri.go:89] found id: "751b58c8fd0aa6c096d8f9e269ff4e2810287a34022b50585f80aa115ad51b3e"
	I1101 08:58:33.896324  119369 cri.go:89] found id: "cf726f61ce62ef122bb5c168a60f9b357efd4e5e2d4b32f8ac642df6b4bbcc99"
	I1101 08:58:33.896328  119369 cri.go:89] found id: "cfc14b381b0aa80371a2c48f7595d764dae7fb241e30dab28da7a775383918a5"
	I1101 08:58:33.896335  119369 cri.go:89] found id: "10ebfd823db73a0aebcbf566a28775df8df6620be809983434902a6b043781d9"
	I1101 08:58:33.896339  119369 cri.go:89] found id: "7933addcfb16f05818d179858f8bcb8a23420cc70606d3e56bac974aef3cbede"
	I1101 08:58:33.896343  119369 cri.go:89] found id: "1fb99f095c842b25e5c61533ad26086df14ed4be80e0d7c10e92904b1fa66d8b"
	I1101 08:58:33.896346  119369 cri.go:89] found id: "847964df5e7f5c0828faef5a50c71c3a46dc74f89223de189a3aa86e2a048ae3"
	I1101 08:58:33.896348  119369 cri.go:89] found id: "903d4bbf18d4cb7142736fe70448b88407e91595b9eb0742874de072b370e2a7"
	I1101 08:58:33.896351  119369 cri.go:89] found id: "a0559bd812da6a92d8f4ad404c9f5ffbd174d17d4da388a8abd1ffa471e1a5aa"
	I1101 08:58:33.896358  119369 cri.go:89] found id: "0d2603d6222947762e038c9ee5a4c993b3dc0e4b2e20f0bd8839b9914920fe76"
	I1101 08:58:33.896367  119369 cri.go:89] found id: "d1be24b1775c4c66bb322093e8231608ba6e23cb809690d3216f3ba62c595818"
	I1101 08:58:33.896373  119369 cri.go:89] found id: "3bd1589cbc2c1ef584afc51e329a2f4694a6d2b2fb8e39039768f397a15ddede"
	I1101 08:58:33.896377  119369 cri.go:89] found id: "4d446343c7b2f8f7708665c6188bf80fbfeea6efc81a9050d38c043ec9d91352"
	I1101 08:58:33.896382  119369 cri.go:89] found id: "9838dcae88ecbeccbefa43c4aff8a8ca559822063b224ef71dd999e68dad7bcf"
	I1101 08:58:33.896386  119369 cri.go:89] found id: "4ff46c8fd9e8928a226f77421dfd843ceb103288440e6f6ca5f3ffbbd63f8719"
	I1101 08:58:33.896390  119369 cri.go:89] found id: "780e64dcae645909a54868d2eb6723be693454eba26cf99f555ba8166c3a9404"
	I1101 08:58:33.896402  119369 cri.go:89] found id: "1c79567a551066d153e7d93dc88b2c5e5aa492b3fb3bb2b2df36684689dd0286"
	I1101 08:58:33.896410  119369 cri.go:89] found id: "cc887abb01e9d6d9f747abf44f07a7324cabe708aa3052b050a5691f1dd22252"
	I1101 08:58:33.896414  119369 cri.go:89] found id: ""
	I1101 08:58:33.896459  119369 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 08:58:33.911282  119369 out.go:203] 
	W1101 08:58:33.912610  119369 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T08:58:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 08:58:33.912638  119369 out.go:285] * 
	* 
	W1101 08:58:33.915865  119369 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 08:58:33.917521  119369 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-993117 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

TestFunctional/parallel/ServiceCmdConnect (603s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-224473 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-224473 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-zf28f" [7ae82631-31df-4abf-8998-5fe8e07615ed] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-224473 -n functional-224473
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-01 09:14:42.057317987 +0000 UTC m=+1176.734283435
functional_test.go:1645: (dbg) Run:  kubectl --context functional-224473 describe po hello-node-connect-7d85dfc575-zf28f -n default
functional_test.go:1645: (dbg) kubectl --context functional-224473 describe po hello-node-connect-7d85dfc575-zf28f -n default:
Name:             hello-node-connect-7d85dfc575-zf28f
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-224473/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:04:41 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fdbh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7fdbh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zf28f to functional-224473
  Normal   Pulling    6m51s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m51s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m51s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m37s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-224473 logs hello-node-connect-7d85dfc575-zf28f -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-224473 logs hello-node-connect-7d85dfc575-zf28f -n default: exit status 1 (63.764254ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zf28f" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-224473 logs hello-node-connect-7d85dfc575-zf28f -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
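Note: the pull never succeeds because the deployment references the short image name "kicbase/echo-server" while the node's containers-registries configuration enforces short-name mode, so the unqualified name resolves to an ambiguous list instead of a single registry. Two hedged workarounds, sketched with standard kubectl and containers-registries.conf(5) mechanisms (that the image lives on docker.io, and that a short-name-mode line already exists in the node's registries.conf, are assumptions rather than facts from this log):

	# Workaround 1: point the existing deployment at a fully qualified image
	# reference so no short-name resolution is needed:
	kubectl --context functional-224473 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest
	# Workaround 2 (node side): relax short-name enforcement and restart crio;
	# append the key instead if no short-name-mode line exists yet:
	minikube -p functional-224473 ssh -- sudo sed -i \
	  's/^short-name-mode = .*/short-name-mode = "permissive"/' \
	  /etc/containers/registries.conf
	minikube -p functional-224473 ssh -- sudo systemctl restart crio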
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-224473 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
(identical to the kubectl describe output shown above for pod hello-node-connect-7d85dfc575-zf28f)

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-224473 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-224473 logs -l app=hello-node-connect: exit status 1 (63.441017ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-zf28f" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-224473 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-224473 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.97.56.133
IPs:                      10.97.56.133
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32658/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
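Endpoints is empty because the service selector matches only the never-Ready pod, so the NodePort has nothing to forward to. A quick confirmation sketch, using the same kubectl context as the harness:

	kubectl --context functional-224473 get endpoints hello-node-connect
	kubectl --context functional-224473 get pods -l app=hello-node-connect -o wide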
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-224473
helpers_test.go:243: (dbg) docker inspect functional-224473:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd",
	        "Created": "2025-11-01T09:02:20.697412874Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 132206,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:02:20.732193782Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd/hostname",
	        "HostsPath": "/var/lib/docker/containers/2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd/hosts",
	        "LogPath": "/var/lib/docker/containers/2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd/2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd-json.log",
	        "Name": "/functional-224473",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-224473:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-224473",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2cb2995d2804d0882996eeb97d4c1bd99bbbe2e1364e46ce93de29ae99b2d8bd",
	                "LowerDir": "/var/lib/docker/overlay2/859acd4b4ebd1a19d355aa35c9a3ea144c45b7f348ee4d738c2386ca694a321f-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/859acd4b4ebd1a19d355aa35c9a3ea144c45b7f348ee4d738c2386ca694a321f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/859acd4b4ebd1a19d355aa35c9a3ea144c45b7f348ee4d738c2386ca694a321f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/859acd4b4ebd1a19d355aa35c9a3ea144c45b7f348ee4d738c2386ca694a321f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-224473",
	                "Source": "/var/lib/docker/volumes/functional-224473/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-224473",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-224473",
	                "name.minikube.sigs.k8s.io": "functional-224473",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "84132087d747aca8265c45e9cdf086026015d390741c53ebdd47acaa768c5348",
	            "SandboxKey": "/var/run/docker/netns/84132087d747",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-224473": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:b1:d9:f6:e6:ef",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6030d8d1735ba7196cd62902a21d784679d1918c449750287a2fab1f53a7703b",
	                    "EndpointID": "f42e62deafe4da0ae605748cb635d64db2400b98b6a725410d5b6d7b08d5b0f3",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-224473",
	                        "2cb2995d2804"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-224473 -n functional-224473
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 logs -n 25: (1.314663887s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-224473 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ image          │ functional-224473 image save --daemon kicbase/echo-server:functional-224473 --alsologtostderr                             │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /etc/ssl/certs/107955.pem                                                                  │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /usr/share/ca-certificates/107955.pem                                                      │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /etc/ssl/certs/51391683.0                                                                  │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /etc/ssl/certs/1079552.pem                                                                 │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /usr/share/ca-certificates/1079552.pem                                                     │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                  │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ dashboard      │ --url --port 36195 -p functional-224473 --alsologtostderr -v=1                                                            │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh sudo cat /etc/test/nested/copy/107955/hosts                                                         │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ image          │ functional-224473 image ls --format short --alsologtostderr                                                               │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ image          │ functional-224473 image ls --format yaml --alsologtostderr                                                                │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:04 UTC │
	│ ssh            │ functional-224473 ssh pgrep buildkitd                                                                                     │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │                     │
	│ image          │ functional-224473 image build -t localhost/my-image:functional-224473 testdata/build --alsologtostderr                    │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:04 UTC │ 01 Nov 25 09:05 UTC │
	│ image          │ functional-224473 image ls                                                                                                │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:05 UTC │
	│ image          │ functional-224473 image ls --format json --alsologtostderr                                                                │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:05 UTC │
	│ image          │ functional-224473 image ls --format table --alsologtostderr                                                               │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:05 UTC │
	│ update-context │ functional-224473 update-context --alsologtostderr -v=2                                                                   │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:05 UTC │
	│ update-context │ functional-224473 update-context --alsologtostderr -v=2                                                                   │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:05 UTC │
	│ update-context │ functional-224473 update-context --alsologtostderr -v=2                                                                   │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:05 UTC │ 01 Nov 25 09:05 UTC │
	│ service        │ functional-224473 service list                                                                                            │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │ 01 Nov 25 09:14 UTC │
	│ service        │ functional-224473 service list -o json                                                                                    │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │ 01 Nov 25 09:14 UTC │
	│ service        │ functional-224473 service --namespace=default --https --url hello-node                                                    │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │                     │
	│ service        │ functional-224473 service hello-node --url --format={{.IP}}                                                               │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │                     │
	│ service        │ functional-224473 service hello-node --url                                                                                │ functional-224473 │ jenkins │ v1.37.0 │ 01 Nov 25 09:14 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:04:36
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:04:36.145186  143259 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:04:36.145347  143259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:36.145355  143259 out.go:374] Setting ErrFile to fd 2...
	I1101 09:04:36.145361  143259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:36.145828  143259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:04:36.146460  143259 out.go:368] Setting JSON to false
	I1101 09:04:36.147739  143259 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2814,"bootTime":1761985062,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:04:36.147874  143259 start.go:143] virtualization: kvm guest
	I1101 09:04:36.149694  143259 out.go:179] * [functional-224473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:04:36.152345  143259 notify.go:221] Checking for updates...
	I1101 09:04:36.152379  143259 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:04:36.154754  143259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:04:36.156755  143259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:04:36.158729  143259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:04:36.161238  143259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:04:36.162300  143259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:04:36.164051  143259 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:04:36.165186  143259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:04:36.197665  143259 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:04:36.197856  143259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:04:36.289985  143259 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-01 09:04:36.274833733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:04:36.290131  143259 docker.go:319] overlay module found
	I1101 09:04:36.292056  143259 out.go:179] * Using the docker driver based on the existing profile
	I1101 09:04:36.293202  143259 start.go:309] selected driver: docker
	I1101 09:04:36.293227  143259 start.go:930] validating driver "docker" against &{Name:functional-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-224473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:04:36.293464  143259 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:04:36.296020  143259 out.go:203] 
	W1101 09:04:36.298384  143259 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:04:36.299845  143259 out.go:203] 
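The aborted restart above fails minikube's memory validation (usable minimum 1800MB). A sketch of a passing invocation; the 4096 figure simply mirrors the Memory:4096 already in the profile config above and is an assumption, not the flag the harness actually passed:

	out/minikube-linux-amd64 start -p functional-224473 --memory=4096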
	
	
	==> CRI-O <==
	Nov 01 09:04:55 functional-224473 crio[3550]: time="2025-11-01T09:04:55.528578011Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.332325114Z" level=info msg="Pulled image: docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da" id=c7ba1b17-a66b-49bb-ace7-6e0d1f044476 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.333069915Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=82cae076-684b-4de7-9ee6-53e9a9ac98b8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.335421472Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ba3105a-eb71-4cf1-a1f2-470216d99cb3 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.336002576Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=0af78ebf-dfb6-4af4-a837-5fbbb5d42c73 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.336123578Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f9151c01-38b1-4509-8858-10a679dfd4a2 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.342492365Z" level=info msg="Creating container: default/mysql-5bb876957f-snjxf/mysql" id=613b2f96-981a-49d9-8c7a-eac91ba4c6ad name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.342661196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.349392901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.351048682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.382264595Z" level=info msg="Created container 8d98fac070800d7369a3c8ab661a9314fb6b361e7f64f98a94f8260768a22207: default/mysql-5bb876957f-snjxf/mysql" id=613b2f96-981a-49d9-8c7a-eac91ba4c6ad name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.383043909Z" level=info msg="Starting container: 8d98fac070800d7369a3c8ab661a9314fb6b361e7f64f98a94f8260768a22207" id=d3c9d1bc-d3e8-4f40-aac7-7d3790d6d8ba name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:05:02 functional-224473 crio[3550]: time="2025-11-01T09:05:02.385048016Z" level=info msg="Started container" PID=7463 containerID=8d98fac070800d7369a3c8ab661a9314fb6b361e7f64f98a94f8260768a22207 description=default/mysql-5bb876957f-snjxf/mysql id=d3c9d1bc-d3e8-4f40-aac7-7d3790d6d8ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b0a21eb87ef6dbe46ab8503635b5c57926788a40029cd52d28ae05739a7689f
	Nov 01 09:05:27 functional-224473 crio[3550]: time="2025-11-01T09:05:27.246769704Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=d320fc8a-a51a-40d8-9b24-d8d60f6a95d7 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:05:32 functional-224473 crio[3550]: time="2025-11-01T09:05:32.243335564Z" level=info msg="Stopping pod sandbox: 381dfe93568e8aaebdef7df4c070a801682783cfce72e073dc0b4189d9fb3662" id=f270dee5-ac28-4e48-b529-b4b047e7cde0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:05:32 functional-224473 crio[3550]: time="2025-11-01T09:05:32.243440261Z" level=info msg="Stopped pod sandbox (already stopped): 381dfe93568e8aaebdef7df4c070a801682783cfce72e073dc0b4189d9fb3662" id=f270dee5-ac28-4e48-b529-b4b047e7cde0 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 01 09:05:32 functional-224473 crio[3550]: time="2025-11-01T09:05:32.243863852Z" level=info msg="Removing pod sandbox: 381dfe93568e8aaebdef7df4c070a801682783cfce72e073dc0b4189d9fb3662" id=2de37370-c28b-4bcc-a859-828e9f801bf3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:05:32 functional-224473 crio[3550]: time="2025-11-01T09:05:32.247804135Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:05:32 functional-224473 crio[3550]: time="2025-11-01T09:05:32.247888919Z" level=info msg="Removed pod sandbox: 381dfe93568e8aaebdef7df4c070a801682783cfce72e073dc0b4189d9fb3662" id=2de37370-c28b-4bcc-a859-828e9f801bf3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 01 09:05:56 functional-224473 crio[3550]: time="2025-11-01T09:05:56.248984653Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=823fe6ce-3dba-49d0-9487-104f7b4c6675 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:06:18 functional-224473 crio[3550]: time="2025-11-01T09:06:18.24729467Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1c8f6898-ea0b-4490-9144-fc8266704f88 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:07:27 functional-224473 crio[3550]: time="2025-11-01T09:07:27.24708277Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b6ec0451-7427-41e9-8b6e-2bce1495d562 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:07:51 functional-224473 crio[3550]: time="2025-11-01T09:07:51.246611542Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4b0fa407-d891-48d8-84ff-079cb7fa1774 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:10:10 functional-224473 crio[3550]: time="2025-11-01T09:10:10.247398198Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=44d9ba94-d57d-4d92-a78a-c0f7ed422fab name=/runtime.v1.ImageService/PullImage
	Nov 01 09:10:32 functional-224473 crio[3550]: time="2025-11-01T09:10:32.250135862Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cbe40261-c8ca-4e0b-a13c-6112cfa7441d name=/runtime.v1.ImageService/PullImage
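The CRI-O entries above show the kubelet re-attempting the unqualified pull on every back-off interval. On the node side, short-name enforcement can be satisfied with an alias drop-in; a sketch using standard containers-registries.conf alias syntax (the file name and mapping are assumptions, not part of this run):

	# /etc/containers/registries.conf.d/99-echo-server.conf (hypothetical drop-in)
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"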
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	8d98fac070800       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   3b0a21eb87ef6       mysql-5bb876957f-snjxf                       default
	7638c62a494ab       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   22a2c6492d42f       kubernetes-dashboard-855c9754f9-tc5d9        kubernetes-dashboard
	ec53c0637feec       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   ff4eb35bfdff7       dashboard-metrics-scraper-77bf4d6c4c-trhd7   kubernetes-dashboard
	dcaa79bdc06fd       docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58                  9 minutes ago       Running             myfrontend                  0                   48775f9681529       sp-pod                                       default
	4bf0289712301       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   d2c775bd00dfa       busybox-mount                                default
	027fc14425ba9       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   9832431c7cfa1       nginx-svc                                    default
	fdf1cd1763451       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   a2041c2de4ca2       storage-provisioner                          kube-system
	525eb0388dd04       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   a76d293e0ec10       kube-apiserver-functional-224473             kube-system
	662c443078af7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   17b377c47d802       kube-controller-manager-functional-224473    kube-system
	3ccdb3678d35c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   e32747505ba39       kube-scheduler-functional-224473             kube-system
	d8bba8aa29cc9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   92b79571a28b1       etcd-functional-224473                       kube-system
	abe90eeb7bb63       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   4032d6cfc7dbc       coredns-66bc5c9577-qt59v                     kube-system
	f4aeb4f7480df       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   a2041c2de4ca2       storage-provisioner                          kube-system
	85336468d2943       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   1f0546f4831f6       kube-proxy-8gz5n                             kube-system
	699c5d65e5408       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   6b6a8ea5a9d28       kindnet-6hrmd                                kube-system
	c8a2cbfbebd76       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   4032d6cfc7dbc       coredns-66bc5c9577-qt59v                     kube-system
	0a19cd873d5dc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   6b6a8ea5a9d28       kindnet-6hrmd                                kube-system
	90120e555f543       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 12 minutes ago      Exited              kube-proxy                  0                   1f0546f4831f6       kube-proxy-8gz5n                             kube-system
	1fdbd5adcba46       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   92b79571a28b1       etcd-functional-224473                       kube-system
	7c5132804cee4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   17b377c47d802       kube-controller-manager-functional-224473    kube-system
	120f07486ac43       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   e32747505ba39       kube-scheduler-functional-224473             kube-system
	
	
	==> coredns [abe90eeb7bb636aa3155f74d744d51e88704d4701fddd337a86011d373510e44] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59013 - 47868 "HINFO IN 6390829374125997002.5970299406080486298. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069681895s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c8a2cbfbebd7610d2a7b0e508d8fb8ffe768a935a8009aace3d09c52229c114e] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58557 - 57418 "HINFO IN 7592694354075577633.1874172125217621677. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020325982s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-224473
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-224473
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=functional-224473
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_02_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:02:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-224473
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:14:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:12:13 +0000   Sat, 01 Nov 2025 09:02:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:12:13 +0000   Sat, 01 Nov 2025 09:02:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:12:13 +0000   Sat, 01 Nov 2025 09:02:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:12:13 +0000   Sat, 01 Nov 2025 09:02:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-224473
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                583a8cd3-c0f1-4890-9010-b6a20e7ecf85
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-c6fx6                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-zf28f           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-snjxf                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m48s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  kube-system                 coredns-66bc5c9577-qt59v                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-224473                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-6hrmd                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-224473              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-224473     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-8gz5n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-224473              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-trhd7    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tc5d9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x6 over 12m)  kubelet          Node functional-224473 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x6 over 12m)  kubelet          Node functional-224473 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x6 over 12m)  kubelet          Node functional-224473 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-224473 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-224473 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-224473 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-224473 event: Registered Node functional-224473 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-224473 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-224473 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-224473 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-224473 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-224473 event: Registered Node functional-224473 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 53 1e 0b f5 f9 08 06
	[ +20.616610] IPv4: martian source 10.244.0.1 from 10.244.0.54, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 5d 8b 4b c3 ca 08 06
	[Nov 1 08:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.063864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023900] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023903] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +2.047798] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[Nov 1 08:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +8.511341] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +16.382756] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +32.253538] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	
	
	==> etcd [1fdbd5adcba46a551ac15d0fadc78e3949f1edf19ae4eaf2ff576a536f31a1ba] <==
	{"level":"warn","ts":"2025-11-01T09:02:31.425259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:02:31.431633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:02:31.437417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:02:31.450089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:02:31.456461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:02:31.463532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:02:31.512274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:03:30.085404Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-01T09:03:30.085485Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-224473","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-01T09:03:30.085590Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:03:30.087109Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-01T09:03:30.087190Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:03:30.087200Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-01T09:03:30.087275Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-01T09:03:30.087289Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-01T09:03:30.087287Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:03:30.087347Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:03:30.087363Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-01T09:03:30.087287Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-01T09:03:30.087384Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-01T09:03:30.087391Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:03:30.088987Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-01T09:03:30.089043Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-01T09:03:30.089069Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-01T09:03:30.089075Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-224473","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [d8bba8aa29cc945e41781abd8aabc9225b12ef5574ea61ab47d0f7a27dad2a02] <==
	{"level":"warn","ts":"2025-11-01T09:03:52.910230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.916723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.922884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.930375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.936745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.943525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.962773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.969104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.975759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.982255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.988875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:52.996368Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.003701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.011578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.019055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.026746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.043196Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.047767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.055115Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.063869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:03:53.119290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39720","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:05:03.392823Z","caller":"traceutil/trace.go:172","msg":"trace[939508249] transaction","detail":"{read_only:false; response_revision:855; number_of_response:1; }","duration":"154.599741ms","start":"2025-11-01T09:05:03.238199Z","end":"2025-11-01T09:05:03.392799Z","steps":["trace[939508249] 'process raft request'  (duration: 154.303793ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:13:52.584384Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1130}
	{"level":"info","ts":"2025-11-01T09:13:52.603868Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1130,"took":"19.115803ms","hash":275551160,"current-db-size-bytes":3350528,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1499136,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-01T09:13:52.603929Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":275551160,"revision":1130,"compact-revision":-1}
	
	
	==> kernel <==
	 09:14:43 up 57 min,  0 user,  load average: 0.11, 0.22, 0.48
	Linux functional-224473 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0a19cd873d5dc088911b3819e828bde932dbe48e7e78d90e8dbcc265ad3fd909] <==
	I1101 09:02:40.540790       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:02:40.541076       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1101 09:02:40.541219       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:02:40.541234       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:02:40.541253       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:02:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:02:40.836261       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:02:40.836331       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:02:40.836346       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:02:40.836490       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:02:41.136446       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:02:41.136467       1 metrics.go:72] Registering metrics
	I1101 09:02:41.136523       1 controller.go:711] "Syncing nftables rules"
	I1101 09:02:50.836229       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:02:50.836307       1 main.go:301] handling current node
	I1101 09:03:00.840325       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:03:00.840362       1 main.go:301] handling current node
	I1101 09:03:10.839173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:03:10.839216       1 main.go:301] handling current node
	
	
	==> kindnet [699c5d65e5408d79b39bb4f58d4ab9c0a8fe40e51ee6b1a0c91794e112346619] <==
	I1101 09:12:40.116532       1 main.go:301] handling current node
	I1101 09:12:50.109450       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:12:50.109482       1 main.go:301] handling current node
	I1101 09:13:00.109428       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:13:00.109457       1 main.go:301] handling current node
	I1101 09:13:10.117443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:13:10.117475       1 main.go:301] handling current node
	I1101 09:13:20.110953       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:13:20.110999       1 main.go:301] handling current node
	I1101 09:13:30.108735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:13:30.108784       1 main.go:301] handling current node
	I1101 09:13:40.110620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:13:40.110691       1 main.go:301] handling current node
	I1101 09:13:50.111267       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:13:50.111309       1 main.go:301] handling current node
	I1101 09:14:00.116042       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:14:00.116083       1 main.go:301] handling current node
	I1101 09:14:10.118233       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:14:10.118278       1 main.go:301] handling current node
	I1101 09:14:20.109316       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:14:20.109347       1 main.go:301] handling current node
	I1101 09:14:30.112207       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:14:30.112235       1 main.go:301] handling current node
	I1101 09:14:40.110964       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1101 09:14:40.111026       1 main.go:301] handling current node
	
	
	==> kube-apiserver [525eb0388dd0451e2e98679b9b9f671f0152647b1e450d6d5d3913feb2c87a3b] <==
	I1101 09:03:53.638812       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:03:54.518776       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1101 09:03:54.723721       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1101 09:03:54.725466       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:03:54.730554       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:03:55.119080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:03:55.207215       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:03:55.215236       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:03:55.274251       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:03:55.281706       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:03:57.288235       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:04:20.858582       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.106.66"}
	I1101 09:04:24.914132       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.204.30"}
	I1101 09:04:26.441965       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.106.252.107"}
	I1101 09:04:41.708345       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.97.56.133"}
	E1101 09:04:45.238681       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:32958: use of closed network connection
	I1101 09:04:47.789793       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:04:47.917033       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.73.104"}
	I1101 09:04:47.928218       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.114.113"}
	E1101 09:04:55.026462       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:55936: use of closed network connection
	I1101 09:04:55.153278       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.108.186.116"}
	E1101 09:05:09.276538       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37938: use of closed network connection
	E1101 09:05:10.180672       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37956: use of closed network connection
	E1101 09:05:11.747703       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37964: use of closed network connection
	I1101 09:13:53.540639       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [662c443078af735b747e2cc2286aed7e42cb1549c98ad6a4e22fdb6a1f3b125c] <==
	I1101 09:03:56.928015       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:03:56.928104       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:03:56.928129       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:03:56.928137       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:03:56.928142       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:03:56.931322       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:03:56.933530       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:03:56.933570       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:03:56.933596       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:03:56.933720       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1101 09:03:56.933746       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:03:56.935058       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:03:56.937256       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:03:56.938465       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:03:56.940029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:03:56.943306       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:03:56.946599       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:03:56.948950       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:03:56.958288       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1101 09:04:47.842636       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:04:47.846534       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:04:47.850634       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:04:47.853173       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:04:47.854903       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1101 09:04:47.859080       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [7c5132804cee4ba5df4e1d9ecf1405704f46178b3a1cb6955777cd915e082c9d] <==
	I1101 09:02:38.879849       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-224473"
	I1101 09:02:38.879893       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:02:38.879932       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:02:38.879991       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:02:38.880496       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:02:38.880595       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:02:38.880824       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:02:38.880936       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:02:38.881038       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:02:38.881478       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:02:38.881490       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:02:38.881497       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:02:38.882488       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:02:38.882676       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:02:38.884766       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:02:38.884801       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:02:38.884823       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:02:38.884827       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:02:38.884831       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:02:38.884959       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:02:38.884985       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:02:38.891633       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-224473" podCIDRs=["10.244.0.0/24"]
	I1101 09:02:38.892606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:02:38.902278       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:02:53.881968       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [85336468d29434338c7f7752b17530c6234915334dac3f71714606c3526d1a42] <==
	E1101 09:03:19.799690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-224473&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:03:21.234302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-224473&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:03:24.075268       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-224473&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:03:28.366034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-224473&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:03:46.363205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-224473&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1101 09:04:05.199440       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:04:05.199476       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:04:05.199556       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:04:05.220576       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:04:05.220628       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:04:05.226535       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:04:05.226996       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:04:05.227034       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:04:05.228358       1 config.go:200] "Starting service config controller"
	I1101 09:04:05.228384       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:04:05.228386       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:04:05.228404       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:04:05.228420       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:04:05.228428       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:04:05.228641       1 config.go:309] "Starting node config controller"
	I1101 09:04:05.228678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:04:05.228686       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:04:05.328669       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:04:05.328663       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:04:05.328716       1 shared_informer.go:356] "Caches are synced" controller="service config"
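	
	Note: both kube-proxy instances log "nodePortAddresses is unset; NodePort connections will be accepted on all local IPs" and themselves suggest `--nodeport-addresses primary`. A minimal sketch of that remedy, assuming the cluster is started with a KubeProxyConfiguration file (field names per kubeproxy.config.k8s.io/v1alpha1; illustrative only, not a change this test run makes):
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  # Accept NodePort traffic only on the node's primary IP instead of all local IPs.
	  nodePortAddresses:
	    - "primary"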
	
	
	==> kube-proxy [90120e555f5430575ac40a1bc9bdc560980279b15837d0547db13e303aa29115] <==
	I1101 09:02:40.399248       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:02:40.465825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:02:40.566297       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:02:40.566365       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1101 09:02:40.566529       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:02:40.585830       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:02:40.585894       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:02:40.591416       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:02:40.591836       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:02:40.591861       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:02:40.592938       1 config.go:200] "Starting service config controller"
	I1101 09:02:40.592958       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:02:40.592997       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:02:40.592994       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:02:40.593004       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:02:40.593011       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:02:40.593069       1 config.go:309] "Starting node config controller"
	I1101 09:02:40.593082       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:02:40.693506       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:02:40.693547       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:02:40.693556       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:02:40.693581       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [120f07486ac43f5cf02dc014744838e67d0b4ef68f673ac189ed3f805fe76271] <==
	E1101 09:02:31.916464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:02:31.916563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:02:31.916820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:02:31.916929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:02:31.917041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:02:31.917181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:02:31.917194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:02:32.746153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:02:32.798799       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:02:32.804137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:02:32.847954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:02:32.953056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:02:32.975220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:02:33.001429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:02:33.012646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:02:33.062951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:02:33.175620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:02:33.192970       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1101 09:02:35.412756       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:03:30.304201       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1101 09:03:30.304279       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:03:30.304322       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1101 09:03:30.304347       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1101 09:03:30.304377       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1101 09:03:30.304400       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [3ccdb3678d35c958d403d452eded12c506398c37fe77a9c46ec0214f906af568] <==
	I1101 09:03:53.309303       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:03:53.909697       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:03:53.909725       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:03:53.915235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:03:53.915271       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:03:53.915282       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:03:53.915291       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:03:53.915247       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:03:53.915338       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:03:53.915519       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:03:53.915797       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:03:54.015865       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:03:54.015885       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:03:54.015976       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:12:05 functional-224473 kubelet[4091]: E1101 09:12:05.245779    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:12:08 functional-224473 kubelet[4091]: E1101 09:12:08.246691    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:12:19 functional-224473 kubelet[4091]: E1101 09:12:19.246600    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:12:23 functional-224473 kubelet[4091]: E1101 09:12:23.246645    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:12:34 functional-224473 kubelet[4091]: E1101 09:12:34.247022    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:12:38 functional-224473 kubelet[4091]: E1101 09:12:38.246661    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:12:49 functional-224473 kubelet[4091]: E1101 09:12:49.245991    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:12:52 functional-224473 kubelet[4091]: E1101 09:12:52.247236    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:13:00 functional-224473 kubelet[4091]: E1101 09:13:00.246477    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:13:06 functional-224473 kubelet[4091]: E1101 09:13:06.246614    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:13:12 functional-224473 kubelet[4091]: E1101 09:13:12.246667    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:13:19 functional-224473 kubelet[4091]: E1101 09:13:19.246193    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:13:25 functional-224473 kubelet[4091]: E1101 09:13:25.246069    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:13:32 functional-224473 kubelet[4091]: E1101 09:13:32.246895    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:13:36 functional-224473 kubelet[4091]: E1101 09:13:36.246951    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:13:43 functional-224473 kubelet[4091]: E1101 09:13:43.245781    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:13:47 functional-224473 kubelet[4091]: E1101 09:13:47.246834    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:13:57 functional-224473 kubelet[4091]: E1101 09:13:57.246372    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:14:00 functional-224473 kubelet[4091]: E1101 09:14:00.248155    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:14:08 functional-224473 kubelet[4091]: E1101 09:14:08.246671    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:14:14 functional-224473 kubelet[4091]: E1101 09:14:14.246156    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:14:20 functional-224473 kubelet[4091]: E1101 09:14:20.248997    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:14:29 functional-224473 kubelet[4091]: E1101 09:14:29.246824    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
	Nov 01 09:14:33 functional-224473 kubelet[4091]: E1101 09:14:33.246538    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-c6fx6" podUID="d8d9a21c-6628-47e5-840f-b28718514083"
	Nov 01 09:14:43 functional-224473 kubelet[4091]: E1101 09:14:43.246577    4091 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-zf28f" podUID="7ae82631-31df-4abf-8998-5fe8e07615ed"
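	
	Note: this ImagePullBackOff loop is why the hello-node and hello-node-connect pods never become Ready, and why ServiceCmdConnect and ServiceCmd/DeployApp run out their ~600s timeouts. CRI-O resolves image names through containers-registries.conf, and with short-name-mode = "enforcing" the unqualified reference kicbase/echo-server:latest is rejected because the unqualified-search-registries list yields an ambiguous candidate set. A hedged sketch of one fix via a short-name alias (file path and alias are illustrative, not something the test harness actually writes):
	
	  # /etc/containers/registries.conf.d/01-echo-server.conf
	  [aliases]
	    # Pin the short name to a fully qualified reference so "enforcing"
	    # mode no longer sees an ambiguous candidate list.
	    "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	
	Deploying with the fully qualified image name (docker.io/kicbase/echo-server:latest) would sidestep short-name resolution entirely.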
	
	
	==> kubernetes-dashboard [7638c62a494ab5dd34f376047de62a8afd7cb3df87e35d0df0cdc03d6060bf20] <==
	2025/11/01 09:04:53 Starting overwatch
	2025/11/01 09:04:53 Using namespace: kubernetes-dashboard
	2025/11/01 09:04:53 Using in-cluster config to connect to apiserver
	2025/11/01 09:04:53 Using secret token for csrf signing
	2025/11/01 09:04:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:04:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:04:53 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:04:53 Generating JWE encryption key
	2025/11/01 09:04:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:04:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:04:54 Initializing JWE encryption key from synchronized object
	2025/11/01 09:04:54 Creating in-cluster Sidecar client
	2025/11/01 09:04:54 Successful request to sidecar
	2025/11/01 09:04:54 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [f4aeb4f7480df55b1e83d3a87e734d0529f3e808fca77af4c03d20e90be168bc] <==
	I1101 09:03:19.691998       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:03:19.695600       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [fdf1cd176345144b119c57ef569fc667dc6f38cf098598488e7e804caf6fe9e5] <==
	W1101 09:14:19.608517       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:21.611765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:21.615832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:23.618771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:23.623641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:25.627277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:25.633070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:27.636093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:27.640359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:29.643826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:29.649312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:31.652563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:31.656837       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:33.660428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:33.664377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:35.667883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:35.671981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:37.674605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:37.678676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:39.681689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:39.685530       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:41.689211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:41.694434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:43.698104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:14:43.702675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-224473 -n functional-224473
helpers_test.go:269: (dbg) Run:  kubectl --context functional-224473 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-c6fx6 hello-node-connect-7d85dfc575-zf28f
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-224473 describe pod busybox-mount hello-node-75c85bcc94-c6fx6 hello-node-connect-7d85dfc575-zf28f
helpers_test.go:290: (dbg) kubectl --context functional-224473 describe pod busybox-mount hello-node-75c85bcc94-c6fx6 hello-node-connect-7d85dfc575-zf28f:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-224473/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:04:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://4bf02897123017b2ce3996f468400c3509121301832b7c268d4ae04c515c4510
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 01 Nov 2025 09:04:34 +0000
	      Finished:     Sat, 01 Nov 2025 09:04:34 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zj5l9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-zj5l9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-224473
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.154s (3.154s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-c6fx6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-224473/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:04:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-klrcd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-klrcd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c6fx6 to functional-224473
	  Normal   Pulling    7m17s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    11s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     11s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-zf28f
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-224473/192.168.49.2
	Start Time:       Sat, 01 Nov 2025 09:04:41 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7fdbh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7fdbh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-zf28f to functional-224473
	  Normal   Pulling    6m53s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m53s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m53s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x41 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x41 over 10m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.00s)
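
Every failure in this group traces back to the same pull error visible in the kubelet log and pod events: CRI-O's short-name handling is in enforcing mode, so the unqualified reference kicbase/echo-server is refused when it resolves ambiguously against the configured search registries ("returns ambiguous list"). A fully qualified name sidesteps short-name resolution entirely; a minimal sketch of that workaround (the docker.io/ prefix is the assumed fix, not what the test actually runs):

	# qualify the image so CRI-O never consults short-name resolution
	kubectl --context functional-224473 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest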

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-224473 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-224473 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-c6fx6" [d8d9a21c-6628-47e5-840f-b28718514083] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-224473 -n functional-224473
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-01 09:14:25.270416343 +0000 UTC m=+1159.947381792
functional_test.go:1460: (dbg) Run:  kubectl --context functional-224473 describe po hello-node-75c85bcc94-c6fx6 -n default
functional_test.go:1460: (dbg) kubectl --context functional-224473 describe po hello-node-75c85bcc94-c6fx6 -n default:
Name:             hello-node-75c85bcc94-c6fx6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-224473/192.168.49.2
Start Time:       Sat, 01 Nov 2025 09:04:24 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
  IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-klrcd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-klrcd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-c6fx6 to functional-224473
  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m52s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m52s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-224473 logs hello-node-75c85bcc94-c6fx6 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-224473 logs hello-node-75c85bcc94-c6fx6 -n default: exit status 1 (71.533701ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-c6fx6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-224473 logs hello-node-75c85bcc94-c6fx6 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.67s)
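
The node-side alternative to qualifying the image is to teach the runtime how to resolve the short name. The containers image stack reads alias tables from drop-in files under /etc/containers/registries.conf.d/, and an entry like the following would make the unqualified pull unambiguous even in enforcing mode (a sketch of the documented [aliases] syntax; the file name is hypothetical and nothing like it ships with minikube):

	# /etc/containers/registries.conf.d/echo-server.conf
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"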

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image load --daemon kicbase/echo-server:functional-224473 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-224473" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.98s)
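
`image load --daemon` streams the tagged image out of the host Docker daemon into the cluster's CRI-O storage, and the assertion at functional_test.go:461 then looks for the tag in `image ls` output; the reload and tag-and-load variants below fail the same check. When debugging by hand it helps to compare the host and cluster views side by side (plain uses of existing commands, nothing test-specific):

	docker image ls kicbase/echo-server                     # host daemon view
	out/minikube-linux-amd64 -p functional-224473 image ls  # cluster (CRI-O) view; CRI-O may print fully qualified names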

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image load --daemon kicbase/echo-server:functional-224473 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-224473" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.96s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-224473
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image load --daemon kicbase/echo-server:functional-224473 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-224473" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image save kicbase/echo-server:functional-224473 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1101 09:04:44.626697  146145 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:04:44.627039  146145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:44.627054  146145 out.go:374] Setting ErrFile to fd 2...
	I1101 09:04:44.627058  146145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:44.627270  146145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:04:44.627895  146145 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:04:44.628042  146145 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:04:44.628461  146145 cli_runner.go:164] Run: docker container inspect functional-224473 --format={{.State.Status}}
	I1101 09:04:44.647562  146145 ssh_runner.go:195] Run: systemctl --version
	I1101 09:04:44.647643  146145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224473
	I1101 09:04:44.667465  146145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/functional-224473/id_rsa Username:docker}
	I1101 09:04:44.766751  146145 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1101 09:04:44.766828  146145 cache_images.go:255] Failed to load cached images for "functional-224473": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1101 09:04:44.766862  146145 cache_images.go:267] failed pushing to: functional-224473

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
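
This one is a cascade rather than an independent bug: ImageSaveToFile never produced echo-server-save.tar, so the stat in the stderr above ("no such file or directory") cannot succeed. Re-running the pair in order makes the dependency explicit (same commands the tests issue, just sequenced with a guard; the /tmp path is illustrative):

	out/minikube-linux-amd64 -p functional-224473 image save kicbase/echo-server:functional-224473 /tmp/echo-server-save.tar
	test -s /tmp/echo-server-save.tar && \
	  out/minikube-linux-amd64 -p functional-224473 image load /tmp/echo-server-save.tar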

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-224473
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image save --daemon kicbase/echo-server:functional-224473 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-224473
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-224473: exit status 1 (18.583459ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-224473

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-224473

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.36s)
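
Note the asymmetry the test encodes: it saves kicbase/echo-server:functional-224473 out of the cluster but inspects localhost/kicbase/echo-server:functional-224473 on the host, since the test expects CRI-O to have normalized the unqualified tag under the localhost/ prefix in its store. Because the earlier load steps never succeeded, there was likely nothing to save in the first place; checking both spellings separates a naming problem from a missing image (plain docker invocations, nothing assumed beyond the two names):

	docker image inspect localhost/kicbase/echo-server:functional-224473 --format '{{.Id}}'
	docker image inspect kicbase/echo-server:functional-224473 --format '{{.Id}}'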

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 service --namespace=default --https --url hello-node: exit status 115 (548.711563ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31852
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-224473 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 service hello-node --url --format={{.IP}}: exit status 115 (549.840457ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-224473 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 service hello-node --url: exit status 115 (546.297302ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31852
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-224473 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31852
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.55s)
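
All three service subtests (HTTPS, Format, URL) print a plausible URL and then exit 115 for the same reason: minikube refuses to hand out a URL for a service with no running backing pod, and both hello-node deployments are stuck in ImagePullBackOff. The Service and its NodePort exist; only the endpoints are empty, which is quick to confirm (standard kubectl, no test helpers):

	kubectl --context functional-224473 get svc,endpoints hello-node   # NodePort present, endpoint addresses empty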

                                                
                                    
TestJSONOutput/pause/Command (1.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-710213 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-710213 --output=json --user=testUser: exit status 80 (1.681547674s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"38f1b7b6-1c44-478a-b265-97cc3f7b5cf3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-710213 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"bded72f5-38bd-448e-8064-39b7c5ff1afc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T09:22:55Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b9872a2d-602d-4ef4-935d-25e0270fc203","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-710213 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.68s)

                                                
                                    
TestJSONOutput/unpause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-710213 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-710213 --output=json --user=testUser: exit status 80 (1.766055876s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2d78aaec-fef1-4dd9-89c5-a4724e89ec51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-710213 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9f1176bc-6787-42f5-8741-aa622cb91689","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-01T09:22:57Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"4c7e8654-b4c7-4559-b61d-3666921ab28e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-710213 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.77s)
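
Both pause and unpause die on the same probe: minikube enumerates running containers with `sudo runc list -f json`, and /run/runc does not exist inside the node. That is consistent with CRI-O driving a runtime other than stock runc (crun keeps its state under /run/crun) or runc configured with a non-default --root; either way the listing fails outright instead of returning an empty set. A quick way to see which state directory is actually populated (the paths are the two runtimes' defaults; the profile name matches this test):

	minikube ssh -p json-output-710213 -- sudo ls /run/runc /run/crun   # whichever exists is the active runtime's state dir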

                                                
                                    
TestPause/serial/Pause (7.97s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-902975 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-902975 --alsologtostderr -v=5: exit status 80 (2.482695699s)

                                                
                                                
-- stdout --
	* Pausing node pause-902975 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:36:32.447938  297770 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:36:32.448030  297770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:32.448039  297770 out.go:374] Setting ErrFile to fd 2...
	I1101 09:36:32.448042  297770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:32.448232  297770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:36:32.448448  297770 out.go:368] Setting JSON to false
	I1101 09:36:32.448482  297770 mustload.go:66] Loading cluster: pause-902975
	I1101 09:36:32.448837  297770 config.go:182] Loaded profile config "pause-902975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:32.449271  297770 cli_runner.go:164] Run: docker container inspect pause-902975 --format={{.State.Status}}
	I1101 09:36:32.469808  297770 host.go:66] Checking if "pause-902975" exists ...
	I1101 09:36:32.470161  297770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:32.533158  297770 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-01 09:36:32.520989452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:32.533776  297770 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-902975 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:36:32.535702  297770 out.go:179] * Pausing node pause-902975 ... 
	I1101 09:36:32.537829  297770 host.go:66] Checking if "pause-902975" exists ...
	I1101 09:36:32.538176  297770 ssh_runner.go:195] Run: systemctl --version
	I1101 09:36:32.538220  297770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-902975
	I1101 09:36:32.559302  297770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32978 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/pause-902975/id_rsa Username:docker}
	I1101 09:36:32.663622  297770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:36:32.677162  297770 pause.go:52] kubelet running: true
	I1101 09:36:32.677232  297770 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:36:32.845561  297770 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:36:32.845700  297770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:36:32.967674  297770 cri.go:89] found id: "71b8d1d6c2ee08888694bb218f0ca907500db3e5f72ee39a90ecb0d21465f22d"
	I1101 09:36:32.967699  297770 cri.go:89] found id: "dd8cedf62a3d2368c72d2d988319c5255e252ef0120a3aeb588100c9cf6eadd1"
	I1101 09:36:32.967704  297770 cri.go:89] found id: "1e42f625fe2d5fc1f929ca2ebc8efab89b4da0503bf76e4280b90f200452e532"
	I1101 09:36:32.967708  297770 cri.go:89] found id: "669fc4f74aec78463a493d4a46267a7870a5dccdab4cbdc8910f092fd3f54377"
	I1101 09:36:32.967712  297770 cri.go:89] found id: "61f8ff1cbeb42879157dbc67a68f842c5f0ad25acbe02d4eb9d5c1babec228a7"
	I1101 09:36:32.967716  297770 cri.go:89] found id: "0f8ecf9e59d3a7c95293acc9cb3817e3e41a75c5fe78a517a7ffcd24e116a3a3"
	I1101 09:36:32.967720  297770 cri.go:89] found id: "287fc0673cdca8876c33403572eaa2834852ba185e0ca80e84089fe7470bf64b"
	I1101 09:36:32.967724  297770 cri.go:89] found id: ""
	I1101 09:36:32.967766  297770 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:36:32.989777  297770 retry.go:31] will retry after 193.120058ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:36:32Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:36:33.183488  297770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:36:33.203489  297770 pause.go:52] kubelet running: false
	I1101 09:36:33.203624  297770 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:36:33.387442  297770 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:36:33.387552  297770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:36:33.485447  297770 cri.go:89] found id: "71b8d1d6c2ee08888694bb218f0ca907500db3e5f72ee39a90ecb0d21465f22d"
	I1101 09:36:33.485492  297770 cri.go:89] found id: "dd8cedf62a3d2368c72d2d988319c5255e252ef0120a3aeb588100c9cf6eadd1"
	I1101 09:36:33.485499  297770 cri.go:89] found id: "1e42f625fe2d5fc1f929ca2ebc8efab89b4da0503bf76e4280b90f200452e532"
	I1101 09:36:33.485504  297770 cri.go:89] found id: "669fc4f74aec78463a493d4a46267a7870a5dccdab4cbdc8910f092fd3f54377"
	I1101 09:36:33.485508  297770 cri.go:89] found id: "61f8ff1cbeb42879157dbc67a68f842c5f0ad25acbe02d4eb9d5c1babec228a7"
	I1101 09:36:33.485513  297770 cri.go:89] found id: "0f8ecf9e59d3a7c95293acc9cb3817e3e41a75c5fe78a517a7ffcd24e116a3a3"
	I1101 09:36:33.485519  297770 cri.go:89] found id: "287fc0673cdca8876c33403572eaa2834852ba185e0ca80e84089fe7470bf64b"
	I1101 09:36:33.485523  297770 cri.go:89] found id: ""
	I1101 09:36:33.485575  297770 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:36:33.507166  297770 retry.go:31] will retry after 313.480823ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:36:33Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:36:33.821703  297770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:36:33.839260  297770 pause.go:52] kubelet running: false
	I1101 09:36:33.839347  297770 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:36:34.002481  297770 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:36:34.002575  297770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:36:34.101973  297770 cri.go:89] found id: "71b8d1d6c2ee08888694bb218f0ca907500db3e5f72ee39a90ecb0d21465f22d"
	I1101 09:36:34.102000  297770 cri.go:89] found id: "dd8cedf62a3d2368c72d2d988319c5255e252ef0120a3aeb588100c9cf6eadd1"
	I1101 09:36:34.102005  297770 cri.go:89] found id: "1e42f625fe2d5fc1f929ca2ebc8efab89b4da0503bf76e4280b90f200452e532"
	I1101 09:36:34.102010  297770 cri.go:89] found id: "669fc4f74aec78463a493d4a46267a7870a5dccdab4cbdc8910f092fd3f54377"
	I1101 09:36:34.102024  297770 cri.go:89] found id: "61f8ff1cbeb42879157dbc67a68f842c5f0ad25acbe02d4eb9d5c1babec228a7"
	I1101 09:36:34.102028  297770 cri.go:89] found id: "0f8ecf9e59d3a7c95293acc9cb3817e3e41a75c5fe78a517a7ffcd24e116a3a3"
	I1101 09:36:34.102033  297770 cri.go:89] found id: "287fc0673cdca8876c33403572eaa2834852ba185e0ca80e84089fe7470bf64b"
	I1101 09:36:34.102036  297770 cri.go:89] found id: ""
	I1101 09:36:34.102080  297770 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:36:34.117319  297770 retry.go:31] will retry after 376.325058ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:36:34Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:36:34.494564  297770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:36:34.516189  297770 pause.go:52] kubelet running: false
	I1101 09:36:34.516261  297770 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:36:34.730147  297770 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:36:34.730241  297770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:36:34.835618  297770 cri.go:89] found id: "71b8d1d6c2ee08888694bb218f0ca907500db3e5f72ee39a90ecb0d21465f22d"
	I1101 09:36:34.835657  297770 cri.go:89] found id: "dd8cedf62a3d2368c72d2d988319c5255e252ef0120a3aeb588100c9cf6eadd1"
	I1101 09:36:34.835664  297770 cri.go:89] found id: "1e42f625fe2d5fc1f929ca2ebc8efab89b4da0503bf76e4280b90f200452e532"
	I1101 09:36:34.835669  297770 cri.go:89] found id: "669fc4f74aec78463a493d4a46267a7870a5dccdab4cbdc8910f092fd3f54377"
	I1101 09:36:34.835673  297770 cri.go:89] found id: "61f8ff1cbeb42879157dbc67a68f842c5f0ad25acbe02d4eb9d5c1babec228a7"
	I1101 09:36:34.835677  297770 cri.go:89] found id: "0f8ecf9e59d3a7c95293acc9cb3817e3e41a75c5fe78a517a7ffcd24e116a3a3"
	I1101 09:36:34.835681  297770 cri.go:89] found id: "287fc0673cdca8876c33403572eaa2834852ba185e0ca80e84089fe7470bf64b"
	I1101 09:36:34.835686  297770 cri.go:89] found id: ""
	I1101 09:36:34.835747  297770 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:36:34.854831  297770 out.go:203] 
	W1101 09:36:34.856043  297770 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:36:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:36:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:36:34.856065  297770 out.go:285] * 
	* 
	W1101 09:36:34.862511  297770 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:36:34.863848  297770 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-902975 --alsologtostderr -v=5" : exit status 80
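
One detail worth noting from the trace above: the second probe already reports kubelet running: false, because minikube disables the kubelet before listing containers and the failing retry loop never re-enables it. A failed pause therefore leaves the node half-paused, with containers running but the kubelet stopped. Re-enabling it by hand is one way to recover (plain systemctl inside the node; this recovery step is an assumption, not something the test performs):

	minikube ssh -p pause-902975 -- sudo systemctl enable --now kubelet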
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-902975
helpers_test.go:243: (dbg) docker inspect pause-902975:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611",
	        "Created": "2025-11-01T09:35:45.180634233Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282651,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:35:45.282012825Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/hosts",
	        "LogPath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611-json.log",
	        "Name": "/pause-902975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-902975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-902975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611",
	                "LowerDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-902975",
	                "Source": "/var/lib/docker/volumes/pause-902975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-902975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-902975",
	                "name.minikube.sigs.k8s.io": "pause-902975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58199ea573559524513512b94bd78ea8eba309d717a99673ed7319002de1d1fb",
	            "SandboxKey": "/var/run/docker/netns/58199ea57355",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-902975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:ac:84:50:7c:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d401f3fc6132d2e5da7d1c5eb6ef560481e0d3d7a34dac33b55f9b7ab89d40f6",
	                    "EndpointID": "b701d3701d87d4b66bca2cff824b908599ac4e4fa5f2b4c3864e14d64f91c972",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-902975",
	                        "d6d5d46e2a49"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
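Note that the inspect output reports `"Paused": false` for the node container itself: `minikube pause` freezes the workloads inside the node through the container runtime, not the outer Docker container, so Docker-level state staying `running` is expected even on a successful pause. To pull just the state fields from the full JSON (illustrative one-liner):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-902975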
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-902975 -n pause-902975
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-902975 -n pause-902975: exit status 2 (481.566343ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
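`--format={{.Host}}` surfaces only the host state, which is still `Running`; the non-zero exit reflects other components. When triaging, the JSON output shows each component individually (illustrative invocation; `--output json` is a standard `minikube status` flag):

	out/minikube-linux-amd64 status -p pause-902975 --output=json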
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-902975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-902975 logs -n 25: (1.092759739s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-307390 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo cri-dockerd --version                                                                                 │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl cat containerd --no-pager                                                                   │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo cat /etc/containerd/config.toml                                                                       │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo containerd config dump                                                                                │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl cat crio --no-pager                                                                         │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo crio config                                                                                           │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ delete  │ -p cilium-307390                                                                                                            │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │ 01 Nov 25 09:35 UTC │
	│ start   │ -p pause-902975 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-902975              │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │ 01 Nov 25 09:36 UTC │
	│ delete  │ -p offline-crio-203516                                                                                                      │ offline-crio-203516       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ stop    │ stopped-upgrade-228852 stop                                                                                                 │ stopped-upgrade-228852    │ jenkins │ v1.32.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p NoKubernetes-481344 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-481344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p NoKubernetes-481344 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-481344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p running-upgrade-256879 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-256879    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p stopped-upgrade-228852 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-228852    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p pause-902975 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-902975              │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-228852                                                                                                   │ stopped-upgrade-228852    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ pause   │ -p pause-902975 --alsologtostderr -v=5                                                                                      │ pause-902975              │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p force-systemd-flag-281143 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-281143 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p NoKubernetes-481344 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-481344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ delete  │ -p running-upgrade-256879                                                                                                   │ running-upgrade-256879    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:36:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:36:34.423193  298685 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:36:34.423666  298685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:34.423736  298685 out.go:374] Setting ErrFile to fd 2...
	I1101 09:36:34.423745  298685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:34.425435  298685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:36:34.426067  298685 out.go:368] Setting JSON to false
	I1101 09:36:34.428126  298685 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4732,"bootTime":1761985062,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:36:34.428377  298685 start.go:143] virtualization: kvm guest
	I1101 09:36:34.430948  298685 out.go:179] * [NoKubernetes-481344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:36:34.433093  298685 notify.go:221] Checking for updates...
	I1101 09:36:34.434982  298685 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:36:34.436431  298685 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:36:34.437826  298685 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:36:34.439279  298685 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:36:34.441002  298685 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:36:34.442457  298685 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:36:34.355290  298639 config.go:182] Loaded profile config "NoKubernetes-481344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:34.355865  298639 config.go:182] Loaded profile config "pause-902975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:34.356112  298639 config.go:182] Loaded profile config "running-upgrade-256879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 09:36:34.356412  298639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:36:34.395207  298639 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:36:34.395470  298639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.492781  298639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-01 09:36:34.480204724 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.492961  298639 docker.go:319] overlay module found
	I1101 09:36:34.496849  298639 out.go:179] * Using the docker driver based on user configuration
	I1101 09:36:34.444391  298685 config.go:182] Loaded profile config "NoKubernetes-481344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:34.445288  298685 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1101 09:36:34.445410  298685 start.go:1809] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1101 09:36:34.445447  298685 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:36:34.487965  298685 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:36:34.488071  298685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.618530  298685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:36:34.599041576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.618686  298685 docker.go:319] overlay module found
	I1101 09:36:34.498140  298639 start.go:309] selected driver: docker
	I1101 09:36:34.498163  298639 start.go:930] validating driver "docker" against <nil>
	I1101 09:36:34.498180  298639 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:36:34.498890  298639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.618928  298639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:36:34.599041576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.619720  298639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:36:34.620255  298639 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:36:34.621223  298685 out.go:179] * Using the docker driver based on existing profile
	I1101 09:36:34.622102  298639 out.go:179] * Using Docker driver with root privileges
	I1101 09:36:34.623850  298639 cni.go:84] Creating CNI manager for ""
	I1101 09:36:34.623950  298639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:36:34.623990  298639 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:36:34.624094  298639 start.go:353] cluster config:
	{Name:force-systemd-flag-281143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-281143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:36:34.626707  298639 out.go:179] * Starting "force-systemd-flag-281143" primary control-plane node in "force-systemd-flag-281143" cluster
	I1101 09:36:34.631332  298639 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:36:34.632616  298639 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:36:34.633797  298639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:36:34.633843  298639 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:36:34.633861  298639 cache.go:59] Caching tarball of preloaded images
	I1101 09:36:34.633888  298639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:36:34.633999  298639 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:36:34.634013  298639 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:36:34.634149  298639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/force-systemd-flag-281143/config.json ...
	I1101 09:36:34.634174  298639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/force-systemd-flag-281143/config.json: {Name:mke43da51586a1a9a5e5259ed38555d7cae2a12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:34.662623  298639 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:36:34.662672  298639 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:36:34.662691  298639 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:36:34.662751  298639 start.go:360] acquireMachinesLock for force-systemd-flag-281143: {Name:mk8130c024d31558218158c20093fcabff0fe379 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:36:34.663181  298639 start.go:364] duration metric: took 393.244µs to acquireMachinesLock for "force-systemd-flag-281143"
	I1101 09:36:34.663253  298639 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-281143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-281143 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:36:34.663366  298639 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:36:34.622995  298685 start.go:309] selected driver: docker
	I1101 09:36:34.623457  298685 start.go:930] validating driver "docker" against &{Name:NoKubernetes-481344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-481344 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:36:34.623761  298685 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:36:34.625997  298685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.733548  298685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:36:34.715145929 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.733752  298685 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1101 09:36:34.734026  298685 cni.go:84] Creating CNI manager for ""
	I1101 09:36:34.734113  298685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:36:34.734132  298685 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1101 09:36:34.734213  298685 start.go:353] cluster config:
	{Name:NoKubernetes-481344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-481344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 09:36:34.739403  298685 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-481344
	I1101 09:36:34.740577  298685 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:36:34.741655  298685 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:36:34.146162  290258 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:36:34.146188  290258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:36:34.146254  290258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256879
	I1101 09:36:34.174347  290258 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:36:34.174478  290258 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:36:34.174579  290258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256879
	I1101 09:36:34.189683  290258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/running-upgrade-256879/id_rsa Username:docker}
	I1101 09:36:34.199746  290258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/running-upgrade-256879/id_rsa Username:docker}
	I1101 09:36:34.271165  290258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:36:34.289806  290258 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:36:34.289907  290258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:36:34.306672  290258 api_server.go:72] duration metric: took 191.809528ms to wait for apiserver process to appear ...
	I1101 09:36:34.306702  290258 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:36:34.306732  290258 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:36:34.311542  290258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:36:34.315832  290258 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:36:34.317260  290258 api_server.go:141] control plane version: v1.28.3
	I1101 09:36:34.317334  290258 api_server.go:131] duration metric: took 10.622105ms to wait for apiserver health ...
	I1101 09:36:34.317361  290258 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:36:34.319618  290258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:36:34.321291  290258 system_pods.go:59] 5 kube-system pods found
	I1101 09:36:34.321327  290258 system_pods.go:61] "etcd-running-upgrade-256879" [87b1dcda-7b66-441c-93e2-d2e340ebe6a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:36:34.321341  290258 system_pods.go:61] "kube-apiserver-running-upgrade-256879" [c70ecc6e-c575-4784-a0c9-df8e2fbfa6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:36:34.321353  290258 system_pods.go:61] "kube-controller-manager-running-upgrade-256879" [e04e4cee-1956-4a70-b0bb-64839d792aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:36:34.321363  290258 system_pods.go:61] "kube-scheduler-running-upgrade-256879" [6a18384a-9be5-4634-9f0d-a0655f8a77c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:36:34.321385  290258 system_pods.go:61] "storage-provisioner" [9bb4e582-366e-4284-bdb9-80994997ab1b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1101 09:36:34.321396  290258 system_pods.go:74] duration metric: took 4.014945ms to wait for pod list to return data ...
	I1101 09:36:34.321412  290258 kubeadm.go:587] duration metric: took 206.554721ms to wait for: map[apiserver:true system_pods:true]
	I1101 09:36:34.321432  290258 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:36:34.325003  290258 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:36:34.325089  290258 node_conditions.go:123] node cpu capacity is 8
	I1101 09:36:34.325157  290258 node_conditions.go:105] duration metric: took 3.718186ms to run NodePressure ...
	I1101 09:36:34.325183  290258 start.go:242] waiting for startup goroutines ...
	I1101 09:36:34.852103  290258 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:36:34.853704  290258 addons.go:515] duration metric: took 738.626131ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:36:34.853756  290258 start.go:247] waiting for cluster config update ...
	I1101 09:36:34.853773  290258 start.go:256] writing updated cluster config ...
	I1101 09:36:34.854142  290258 ssh_runner.go:195] Run: rm -f paused
	I1101 09:36:34.925711  290258 start.go:628] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1101 09:36:34.927622  290258 out.go:203] 
	W1101 09:36:34.928772  290258 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1101 09:36:34.930029  290258 out.go:179]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1101 09:36:34.931509  290258 out.go:179] * Done! kubectl is now configured to use "running-upgrade-256879" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.69003534Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.693987862Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.694020661Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.694041155Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.695132855Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.695252297Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.707395179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.707617284Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.708340328Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.71203794Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.712299308Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.721085005Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.78952917Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-xdbjd Namespace:kube-system ID:032caa95d2f7bc2915f898b4bae63a612e721bd48af41ba878f23f074e357b97 UID:bed642d7-1538-4486-9390-dd23c039bed7 NetNS:/var/run/netns/8d26bbb9-ae4f-4f62-a06d-9bae49f75c9f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a84118}] Aliases:map[]}"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.78970673Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-xdbjd for CNI network kindnet (type=ptp)"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790158304Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790271649Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790620065Z" level=info msg="Create NRI interface"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790837671Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790855193Z" level=info msg="runtime interface created"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790880596Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790887953Z" level=info msg="runtime interface starting up..."
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790895513Z" level=info msg="starting plugins..."
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790932616Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.791353786Z" level=info msg="No systemd watchdog enabled"
	Nov 01 09:36:28 pause-902975 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
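
The long "Current CRI-O configuration" entry above is CRI-O's startup dump of its effective TOML config. As a hedged aside (assuming the crio binary inside the node container supports the config subcommand and reads its default config locations), the same rendered configuration can be printed on demand, for example to confirm the pause image and cgroup manager seen in the dump:

	# Sketch only: print CRI-O's effective config inside the node and filter two keys.
	docker exec pause-902975 crio config 2>/dev/null | grep -E 'pause_image |cgroup_manager '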
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	71b8d1d6c2ee0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   032caa95d2f7b       coredns-66bc5c9577-xdbjd               kube-system
	dd8cedf62a3d2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   fc0c79fd51f2b       kindnet-rq66b                          kube-system
	1e42f625fe2d5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   5f98d7db27d7f       kube-proxy-hjsb7                       kube-system
	669fc4f74aec7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   6c6b8df556fd1       kube-controller-manager-pause-902975   kube-system
	61f8ff1cbeb42       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   e0fc5bb597ee8       kube-apiserver-pause-902975            kube-system
	0f8ecf9e59d3a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   eea1d19f3cb2f       kube-scheduler-pause-902975            kube-system
	287fc0673cdca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   bb9636a8fcd2b       etcd-pause-902975                      kube-system
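
The table above is CRI-O's view of the containers on the node. A minimal sketch of reproducing it (assuming crictl is available inside the kicbase node container, pointed at CRI-O's default socket from the config dump earlier):

	# Sketch only: list all CRI containers via CRI-O's socket.
	docker exec pause-902975 crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a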
	
	
	==> coredns [71b8d1d6c2ee08888694bb218f0ca907500db3e5f72ee39a90ecb0d21465f22d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43775 - 6217 "HINFO IN 8861103399179030914.5202394650079970877. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111940903s
	
	
	==> describe nodes <==
	Name:               pause-902975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-902975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=pause-902975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:36:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-902975
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:36:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-902975
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2cc41fd6-14ae-4855-b6b7-b665ab3cd675
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xdbjd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-902975                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-rq66b                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-902975             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-902975    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-hjsb7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-902975             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-902975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-902975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-902975 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-902975 event: Registered Node pause-902975 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-902975 status is now: NodeReady
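
The node description above is API-server state at the moment of the post-mortem. A minimal sketch of regenerating it against this cluster (assuming the kubeconfig context created by the test, which the report itself uses below, still exists):

	# Sketch only: re-run the node description captured in this post-mortem.
	kubectl --context pause-902975 describe node pause-902975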
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 53 1e 0b f5 f9 08 06
	[ +20.616610] IPv4: martian source 10.244.0.1 from 10.244.0.54, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 5d 8b 4b c3 ca 08 06
	[Nov 1 08:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.063864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023900] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023903] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +2.047798] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[Nov 1 08:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +8.511341] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +16.382756] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +32.253538] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	
	
	==> etcd [287fc0673cdca8876c33403572eaa2834852ba185e0ca80e84089fe7470bf64b] <==
	{"level":"warn","ts":"2025-11-01T09:36:01.868748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.877592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.889093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.897875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.907948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.914737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.921705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.929743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.938251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.951056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.959625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.967765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.975826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.983193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.989777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.997490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.005032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.012173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.020213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.028946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.037389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.055162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.061955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.071128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.145187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42420","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:36 up  1:18,  0 user,  load average: 6.60, 2.59, 1.52
	Linux pause-902975 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd8cedf62a3d2368c72d2d988319c5255e252ef0120a3aeb588100c9cf6eadd1] <==
	I1101 09:36:11.414021       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:36:11.414432       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:36:11.414573       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:36:11.414587       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:36:11.414600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:36:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:36:11.620324       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:36:11.620517       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:36:11.620663       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:36:11.712848       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:36:12.115258       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:36:12.115291       1 metrics.go:72] Registering metrics
	I1101 09:36:12.115362       1 controller.go:711] "Syncing nftables rules"
	I1101 09:36:21.621051       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:36:21.621123       1 main.go:301] handling current node
	I1101 09:36:31.625125       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:36:31.625176       1 main.go:301] handling current node
	
	
	==> kube-apiserver [61f8ff1cbeb42879157dbc67a68f842c5f0ad25acbe02d4eb9d5c1babec228a7] <==
	I1101 09:36:02.692254       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:36:02.692261       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:36:02.692332       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:36:02.693774       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:36:02.696039       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:36:02.708566       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:36:02.719999       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:36:02.733462       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:36:03.601689       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:36:03.606621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:36:03.606701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:36:04.316961       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:36:04.367452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:36:04.502156       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:36:04.511335       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 09:36:04.512567       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:36:04.517867       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:36:04.630983       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:36:05.339318       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:36:05.355935       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:36:05.365826       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:36:09.888384       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:36:09.895212       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:36:10.632896       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:36:10.733104       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [669fc4f74aec78463a493d4a46267a7870a5dccdab4cbdc8910f092fd3f54377] <==
	I1101 09:36:09.629089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:36:09.629091       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:36:09.629365       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:36:09.629409       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:36:09.629648       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:36:09.630646       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:36:09.630865       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:36:09.630924       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:36:09.634027       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:36:09.636223       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:36:09.637364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:36:09.639297       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:36:09.639605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:36:09.639690       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:36:09.646408       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:36:09.665178       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:36:09.679035       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:36:09.679208       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:36:09.681387       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:36:09.681524       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:36:09.682245       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:36:09.682293       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:36:09.683678       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:36:09.695464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:36:24.631510       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1e42f625fe2d5fc1f929ca2ebc8efab89b4da0503bf76e4280b90f200452e532] <==
	I1101 09:36:11.243053       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:36:11.310243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:36:11.410506       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:36:11.410583       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:36:11.411050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:36:11.452715       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:36:11.452950       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:36:11.459108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:36:11.459587       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:36:11.459678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:36:11.461395       1 config.go:309] "Starting node config controller"
	I1101 09:36:11.461465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:36:11.461495       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:36:11.461432       1 config.go:200] "Starting service config controller"
	I1101 09:36:11.461545       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:36:11.461716       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:36:11.461737       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:36:11.461848       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:36:11.461876       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:36:11.562806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:36:11.562818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:36:11.563276       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0f8ecf9e59d3a7c95293acc9cb3817e3e41a75c5fe78a517a7ffcd24e116a3a3] <==
	E1101 09:36:02.657042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:36:02.657048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:36:02.657070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:36:02.657098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:36:02.657100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:36:02.657235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:36:02.657330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:36:03.474955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:36:03.498161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:36:03.576144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:36:03.601023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:36:03.626452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:36:03.771313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:36:03.806688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:36:03.809392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:36:03.817508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:36:03.826070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:36:03.834003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:36:03.849330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:36:03.954535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:36:04.013702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:36:04.047216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:36:04.051937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:36:04.157851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:36:06.449510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.249357    1315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.296766    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-902975" podStartSLOduration=1.296724308 podStartE2EDuration="1.296724308s" podCreationTimestamp="2025-11-01 09:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.296715739 +0000 UTC m=+1.164582377" watchObservedRunningTime="2025-11-01 09:36:06.296724308 +0000 UTC m=+1.164590937"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.321752    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-902975" podStartSLOduration=1.32172414 podStartE2EDuration="1.32172414s" podCreationTimestamp="2025-11-01 09:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.310131142 +0000 UTC m=+1.177997778" watchObservedRunningTime="2025-11-01 09:36:06.32172414 +0000 UTC m=+1.189590774"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.322284    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-902975" podStartSLOduration=1.322261586 podStartE2EDuration="1.322261586s" podCreationTimestamp="2025-11-01 09:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.321623996 +0000 UTC m=+1.189490634" watchObservedRunningTime="2025-11-01 09:36:06.322261586 +0000 UTC m=+1.190128230"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.342666    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-902975" podStartSLOduration=2.342643932 podStartE2EDuration="2.342643932s" podCreationTimestamp="2025-11-01 09:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.333307343 +0000 UTC m=+1.201173993" watchObservedRunningTime="2025-11-01 09:36:06.342643932 +0000 UTC m=+1.210510570"
	Nov 01 09:36:09 pause-902975 kubelet[1315]: I1101 09:36:09.631244    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:36:09 pause-902975 kubelet[1315]: I1101 09:36:09.632793    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793623    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4a249ae-598a-453b-9f91-dfbb001b87ed-lib-modules\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793684    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr4t2\" (UniqueName: \"kubernetes.io/projected/c4a249ae-598a-453b-9f91-dfbb001b87ed-kube-api-access-xr4t2\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793708    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/42726f22-7f43-4f8e-be0a-66bacaff1da1-cni-cfg\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793732    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4a249ae-598a-453b-9f91-dfbb001b87ed-xtables-lock\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793752    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb95p\" (UniqueName: \"kubernetes.io/projected/42726f22-7f43-4f8e-be0a-66bacaff1da1-kube-api-access-rb95p\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793774    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4a249ae-598a-453b-9f91-dfbb001b87ed-kube-proxy\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793794    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42726f22-7f43-4f8e-be0a-66bacaff1da1-xtables-lock\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793812    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42726f22-7f43-4f8e-be0a-66bacaff1da1-lib-modules\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:11 pause-902975 kubelet[1315]: I1101 09:36:11.342813    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rq66b" podStartSLOduration=1.342789764 podStartE2EDuration="1.342789764s" podCreationTimestamp="2025-11-01 09:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:11.342255904 +0000 UTC m=+6.210122546" watchObservedRunningTime="2025-11-01 09:36:11.342789764 +0000 UTC m=+6.210656402"
	Nov 01 09:36:11 pause-902975 kubelet[1315]: I1101 09:36:11.356625    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hjsb7" podStartSLOduration=1.3566083 podStartE2EDuration="1.3566083s" podCreationTimestamp="2025-11-01 09:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:11.356081328 +0000 UTC m=+6.223947951" watchObservedRunningTime="2025-11-01 09:36:11.3566083 +0000 UTC m=+6.224474937"
	Nov 01 09:36:21 pause-902975 kubelet[1315]: I1101 09:36:21.714557    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:36:21 pause-902975 kubelet[1315]: I1101 09:36:21.771941    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bed642d7-1538-4486-9390-dd23c039bed7-config-volume\") pod \"coredns-66bc5c9577-xdbjd\" (UID: \"bed642d7-1538-4486-9390-dd23c039bed7\") " pod="kube-system/coredns-66bc5c9577-xdbjd"
	Nov 01 09:36:21 pause-902975 kubelet[1315]: I1101 09:36:21.772009    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfd22\" (UniqueName: \"kubernetes.io/projected/bed642d7-1538-4486-9390-dd23c039bed7-kube-api-access-dfd22\") pod \"coredns-66bc5c9577-xdbjd\" (UID: \"bed642d7-1538-4486-9390-dd23c039bed7\") " pod="kube-system/coredns-66bc5c9577-xdbjd"
	Nov 01 09:36:22 pause-902975 kubelet[1315]: I1101 09:36:22.370025    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xdbjd" podStartSLOduration=12.37000153 podStartE2EDuration="12.37000153s" podCreationTimestamp="2025-11-01 09:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:22.369497193 +0000 UTC m=+17.237363831" watchObservedRunningTime="2025-11-01 09:36:22.37000153 +0000 UTC m=+17.237868169"
	Nov 01 09:36:32 pause-902975 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:36:32 pause-902975 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:36:32 pause-902975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:36:32 pause-902975 systemd[1]: kubelet.service: Consumed 1.247s CPU time.
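
The kubelet log ends with systemd stopping kubelet.service, consistent with the pause attempt under test (pausing stops the kubelet while the node container keeps running). As a hedged sketch (assuming the kic container is still up and runs systemd, as its /sbin/init entrypoint in the inspect output below indicates), the unit states can be checked directly:

	# Sketch only: query kubelet and crio unit states inside the node container.
	docker exec pause-902975 systemctl is-active kubelet crio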
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-902975 -n pause-902975
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-902975 -n pause-902975: exit status 2 (451.061101ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
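
The status probes in this post-mortem use minikube's Go-template --format flag to extract single fields from the status struct ({{.APIServer}} above, {{.Host}} further below). A hedged sketch combining the two fields the report queries into one call:

	# Sketch only: query host and apiserver state together via one Go template.
	out/minikube-linux-amd64 status -p pause-902975 --format='host={{.Host}} apiserver={{.APIServer}}'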
helpers_test.go:269: (dbg) Run:  kubectl --context pause-902975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
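
For reference, the jsonpath/field-selector query above lists only pods whose phase is not Running. A hedged variant that also prints the namespace, one pod per line, using the same context and standard jsonpath range syntax:

	# Sketch only: non-Running pods, namespace-qualified, one per line.
	kubectl --context pause-902975 get po -A --field-selector=status.phase!=Running -o=jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'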
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-902975
helpers_test.go:243: (dbg) docker inspect pause-902975:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611",
	        "Created": "2025-11-01T09:35:45.180634233Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 282651,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:35:45.282012825Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/hosts",
	        "LogPath": "/var/lib/docker/containers/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611/d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611-json.log",
	        "Name": "/pause-902975",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-902975:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-902975",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6d5d46e2a49735792b4a00e1c8049b60d481d0f5526d92f705d7c2543b83611",
	                "LowerDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308/merged",
	                "UpperDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308/diff",
	                "WorkDir": "/var/lib/docker/overlay2/dfb3470d38220d408b2ada854824354d3e5e96e16109271833ff8579b04d8308/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-902975",
	                "Source": "/var/lib/docker/volumes/pause-902975/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-902975",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-902975",
	                "name.minikube.sigs.k8s.io": "pause-902975",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58199ea573559524513512b94bd78ea8eba309d717a99673ed7319002de1d1fb",
	            "SandboxKey": "/var/run/docker/netns/58199ea57355",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32978"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32979"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32982"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32980"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32981"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-902975": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:ac:84:50:7c:b0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d401f3fc6132d2e5da7d1c5eb6ef560481e0d3d7a34dac33b55f9b7ab89d40f6",
	                    "EndpointID": "b701d3701d87d4b66bca2cff824b908599ac4e4fa5f2b4c3864e14d64f91c972",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-902975",
	                        "d6d5d46e2a49"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
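The inspect dump above ends with the container's NetworkSettings, including the "22/tcp" binding to 127.0.0.1:32978 that minikube's SSH access to the node container depends on; later in this log the harness reads the same field with a `docker container inspect -f` Go template. A minimal sketch of recovering that host port programmatically, assuming only the standard library and a local `docker` CLI (the container name is the "pause-902975" profile inspected above):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// inspectPorts mirrors the subset of `docker inspect` output shown above:
// NetworkSettings.Ports maps "22/tcp" to its published host bindings.
type inspectPorts struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	// `docker inspect` emits a JSON array, as in the dump above.
	out, err := exec.Command("docker", "inspect", "pause-902975").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspectPorts
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		// Prints 127.0.0.1:32978 for the run recorded above.
		fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
	}
}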
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-902975 -n pause-902975
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-902975 -n pause-902975: exit status 2 (377.233029ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
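As the "may be ok" note above indicates, `minikube status` encodes cluster state in its exit code, so a non-zero exit with "Running" on stdout is not automatically a failure. A minimal sketch of the same tolerant check the harness performs, assuming the `out/minikube-linux-amd64` binary path used throughout this report:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.Host}}", "-p", "pause-902975", "-n", "pause-902975")
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
		// Exit status 2: a state was reported but the cluster is not fully
		// healthy; treat it as informational, as helpers_test.go does.
		log.Printf("status error: exit status 2 (may be ok)")
		err = nil
	}
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host state: %s", out) // "Running" in the run above
}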
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-902975 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-902975 logs -n 25: (2.160079286s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-307390 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo cri-dockerd --version                                                                                 │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl cat containerd --no-pager                                                                   │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo cat /etc/containerd/config.toml                                                                       │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo containerd config dump                                                                                │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo systemctl cat crio --no-pager                                                                         │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ ssh     │ -p cilium-307390 sudo crio config                                                                                           │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │                     │
	│ delete  │ -p cilium-307390                                                                                                            │ cilium-307390             │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │ 01 Nov 25 09:35 UTC │
	│ start   │ -p pause-902975 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-902975              │ jenkins │ v1.37.0 │ 01 Nov 25 09:35 UTC │ 01 Nov 25 09:36 UTC │
	│ delete  │ -p offline-crio-203516                                                                                                      │ offline-crio-203516       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ stop    │ stopped-upgrade-228852 stop                                                                                                 │ stopped-upgrade-228852    │ jenkins │ v1.32.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p NoKubernetes-481344 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-481344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p NoKubernetes-481344 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-481344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p running-upgrade-256879 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-256879    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p stopped-upgrade-228852 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-228852    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ start   │ -p pause-902975 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-902975              │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ delete  │ -p stopped-upgrade-228852                                                                                                   │ stopped-upgrade-228852    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │ 01 Nov 25 09:36 UTC │
	│ pause   │ -p pause-902975 --alsologtostderr -v=5                                                                                      │ pause-902975              │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p force-systemd-flag-281143 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-281143 │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ start   │ -p NoKubernetes-481344 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-481344       │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	│ delete  │ -p running-upgrade-256879                                                                                                   │ running-upgrade-256879    │ jenkins │ v1.37.0 │ 01 Nov 25 09:36 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:36:34
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:36:34.423193  298685 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:36:34.423666  298685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:34.423736  298685 out.go:374] Setting ErrFile to fd 2...
	I1101 09:36:34.423745  298685 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:36:34.425435  298685 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:36:34.426067  298685 out.go:368] Setting JSON to false
	I1101 09:36:34.428126  298685 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4732,"bootTime":1761985062,"procs":296,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:36:34.428377  298685 start.go:143] virtualization: kvm guest
	I1101 09:36:34.430948  298685 out.go:179] * [NoKubernetes-481344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:36:34.433093  298685 notify.go:221] Checking for updates...
	I1101 09:36:34.434982  298685 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:36:34.436431  298685 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:36:34.437826  298685 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:36:34.439279  298685 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:36:34.441002  298685 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:36:34.442457  298685 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:36:34.355290  298639 config.go:182] Loaded profile config "NoKubernetes-481344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:34.355865  298639 config.go:182] Loaded profile config "pause-902975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:34.356112  298639 config.go:182] Loaded profile config "running-upgrade-256879": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1101 09:36:34.356412  298639 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:36:34.395207  298639 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:36:34.395470  298639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.492781  298639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-01 09:36:34.480204724 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.492961  298639 docker.go:319] overlay module found
	I1101 09:36:34.496849  298639 out.go:179] * Using the docker driver based on user configuration
	I1101 09:36:34.444391  298685 config.go:182] Loaded profile config "NoKubernetes-481344": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:36:34.445288  298685 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1101 09:36:34.445410  298685 start.go:1809] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1101 09:36:34.445447  298685 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:36:34.487965  298685 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:36:34.488071  298685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.618530  298685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:36:34.599041576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.618686  298685 docker.go:319] overlay module found
	I1101 09:36:34.498140  298639 start.go:309] selected driver: docker
	I1101 09:36:34.498163  298639 start.go:930] validating driver "docker" against <nil>
	I1101 09:36:34.498180  298639 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:36:34.498890  298639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.618928  298639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:36:34.599041576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.619720  298639 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:36:34.620255  298639 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 09:36:34.621223  298685 out.go:179] * Using the docker driver based on existing profile
	I1101 09:36:34.622102  298639 out.go:179] * Using Docker driver with root privileges
	I1101 09:36:34.623850  298639 cni.go:84] Creating CNI manager for ""
	I1101 09:36:34.623950  298639 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:36:34.623990  298639 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:36:34.624094  298639 start.go:353] cluster config:
	{Name:force-systemd-flag-281143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-281143 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:36:34.626707  298639 out.go:179] * Starting "force-systemd-flag-281143" primary control-plane node in "force-systemd-flag-281143" cluster
	I1101 09:36:34.631332  298639 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:36:34.632616  298639 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:36:34.633797  298639 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:36:34.633843  298639 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:36:34.633861  298639 cache.go:59] Caching tarball of preloaded images
	I1101 09:36:34.633888  298639 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:36:34.633999  298639 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:36:34.634013  298639 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:36:34.634149  298639 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/force-systemd-flag-281143/config.json ...
	I1101 09:36:34.634174  298639 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/force-systemd-flag-281143/config.json: {Name:mke43da51586a1a9a5e5259ed38555d7cae2a12d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:36:34.662623  298639 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:36:34.662672  298639 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:36:34.662691  298639 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:36:34.662751  298639 start.go:360] acquireMachinesLock for force-systemd-flag-281143: {Name:mk8130c024d31558218158c20093fcabff0fe379 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:36:34.663181  298639 start.go:364] duration metric: took 393.244µs to acquireMachinesLock for "force-systemd-flag-281143"
	I1101 09:36:34.663253  298639 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-281143 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-281143 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:36:34.663366  298639 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:36:34.622995  298685 start.go:309] selected driver: docker
	I1101 09:36:34.623457  298685 start.go:930] validating driver "docker" against &{Name:NoKubernetes-481344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-481344 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:36:34.623761  298685 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:36:34.625997  298685 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:36:34.733548  298685 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:36:34.715145929 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:36:34.733752  298685 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1101 09:36:34.734026  298685 cni.go:84] Creating CNI manager for ""
	I1101 09:36:34.734113  298685 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:36:34.734132  298685 start.go:1904] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1101 09:36:34.734213  298685 start.go:353] cluster config:
	{Name:NoKubernetes-481344 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-481344 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: A
utoPauseInterval:1m0s}
	I1101 09:36:34.739403  298685 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-481344
	I1101 09:36:34.740577  298685 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:36:34.741655  298685 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:36:34.146162  290258 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:36:34.146188  290258 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:36:34.146254  290258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256879
	I1101 09:36:34.174347  290258 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:36:34.174478  290258 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:36:34.174579  290258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" running-upgrade-256879
	I1101 09:36:34.189683  290258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/running-upgrade-256879/id_rsa Username:docker}
	I1101 09:36:34.199746  290258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/running-upgrade-256879/id_rsa Username:docker}
	I1101 09:36:34.271165  290258 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:36:34.289806  290258 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:36:34.289907  290258 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:36:34.306672  290258 api_server.go:72] duration metric: took 191.809528ms to wait for apiserver process to appear ...
	I1101 09:36:34.306702  290258 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:36:34.306732  290258 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:36:34.311542  290258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:36:34.315832  290258 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:36:34.317260  290258 api_server.go:141] control plane version: v1.28.3
	I1101 09:36:34.317334  290258 api_server.go:131] duration metric: took 10.622105ms to wait for apiserver health ...
	I1101 09:36:34.317361  290258 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:36:34.319618  290258 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:36:34.321291  290258 system_pods.go:59] 5 kube-system pods found
	I1101 09:36:34.321327  290258 system_pods.go:61] "etcd-running-upgrade-256879" [87b1dcda-7b66-441c-93e2-d2e340ebe6a6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:36:34.321341  290258 system_pods.go:61] "kube-apiserver-running-upgrade-256879" [c70ecc6e-c575-4784-a0c9-df8e2fbfa6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:36:34.321353  290258 system_pods.go:61] "kube-controller-manager-running-upgrade-256879" [e04e4cee-1956-4a70-b0bb-64839d792aae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:36:34.321363  290258 system_pods.go:61] "kube-scheduler-running-upgrade-256879" [6a18384a-9be5-4634-9f0d-a0655f8a77c4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:36:34.321385  290258 system_pods.go:61] "storage-provisioner" [9bb4e582-366e-4284-bdb9-80994997ab1b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..)
	I1101 09:36:34.321396  290258 system_pods.go:74] duration metric: took 4.014945ms to wait for pod list to return data ...
	I1101 09:36:34.321412  290258 kubeadm.go:587] duration metric: took 206.554721ms to wait for: map[apiserver:true system_pods:true]
	I1101 09:36:34.321432  290258 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:36:34.325003  290258 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:36:34.325089  290258 node_conditions.go:123] node cpu capacity is 8
	I1101 09:36:34.325157  290258 node_conditions.go:105] duration metric: took 3.718186ms to run NodePressure ...
	I1101 09:36:34.325183  290258 start.go:242] waiting for startup goroutines ...
	I1101 09:36:34.852103  290258 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:36:34.853704  290258 addons.go:515] duration metric: took 738.626131ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:36:34.853756  290258 start.go:247] waiting for cluster config update ...
	I1101 09:36:34.853773  290258 start.go:256] writing updated cluster config ...
	I1101 09:36:34.854142  290258 ssh_runner.go:195] Run: rm -f paused
	I1101 09:36:34.925711  290258 start.go:628] kubectl: 1.34.1, cluster: 1.28.3 (minor skew: 6)
	I1101 09:36:34.927622  290258 out.go:203] 
	W1101 09:36:34.928772  290258 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.3.
	I1101 09:36:34.930029  290258 out.go:179]   - Want kubectl v1.28.3? Try 'minikube kubectl -- get pods -A'
	I1101 09:36:34.931509  290258 out.go:179] * Done! kubectl is now configured to use "running-upgrade-256879" cluster and "default" namespace by default
	I1101 09:36:34.742786  298685 preload.go:183] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1101 09:36:34.742887  298685 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:36:34.780063  298685 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:36:34.780179  298685 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	W1101 09:36:35.842656  298685 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1101 09:36:36.031593  298685 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1101 09:36:36.031758  298685 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/NoKubernetes-481344/config.json ...
	I1101 09:36:36.032544  298685 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:36:36.032599  298685 start.go:360] acquireMachinesLock for NoKubernetes-481344: {Name:mk3433fea82117147cc849dd519d1a5bd6df4546 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:36:36.032705  298685 start.go:364] duration metric: took 58.388µs to acquireMachinesLock for "NoKubernetes-481344"
	I1101 09:36:36.032727  298685 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:36:36.032740  298685 fix.go:54] fixHost starting: 
	I1101 09:36:36.033072  298685 cli_runner.go:164] Run: docker container inspect NoKubernetes-481344 --format={{.State.Status}}
	I1101 09:36:36.054332  298685 fix.go:112] recreateIfNeeded on NoKubernetes-481344: state=Running err=<nil>
	W1101 09:36:36.054368  298685 fix.go:138] unexpected machine state, will restart: <nil>
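Each entry in the "Last Start" block follows the klog header format declared at the top of the log (`[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg`); because several minikube processes (PIDs 290258, 298639, 298685) write to the same stream, their lines interleave and timestamps are not monotonic. A minimal sketch of splitting such a line back into its fields; the regexp is my own approximation of that format, not anything minikube ships:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches lines like:
//   I1101 09:36:34.423193  298685 out.go:360] Setting OutFile to fd 1 ...
var klogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "I1101 09:36:34.423193  298685 out.go:360] Setting OutFile to fd 1 ..."
	m := klogLine.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	// klog calls the fourth field "threadid"; in these logs it is the PID.
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}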
	
	
	==> CRI-O <==
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.69003534Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.693987862Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.694020661Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.694041155Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.695132855Z" level=info msg="Conmon does support the --sync option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.695252297Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.707395179Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.707617284Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.708340328Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.71203794Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.712299308Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.721085005Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.78952917Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-xdbjd Namespace:kube-system ID:032caa95d2f7bc2915f898b4bae63a612e721bd48af41ba878f23f074e357b97 UID:bed642d7-1538-4486-9390-dd23c039bed7 NetNS:/var/run/netns/8d26bbb9-ae4f-4f62-a06d-9bae49f75c9f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a84118}] Aliases:map[]}"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.78970673Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-xdbjd for CNI network kindnet (type=ptp)"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790158304Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790271649Z" level=info msg="Starting seccomp notifier watcher"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790620065Z" level=info msg="Create NRI interface"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790837671Z" level=info msg="built-in NRI default validator is disabled"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790855193Z" level=info msg="runtime interface created"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790880596Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790887953Z" level=info msg="runtime interface starting up..."
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790895513Z" level=info msg="starting plugins..."
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.790932616Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 01 09:36:28 pause-902975 crio[2170]: time="2025-11-01T09:36:28.791353786Z" level=info msg="No systemd watchdog enabled"
	Nov 01 09:36:28 pause-902975 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	71b8d1d6c2ee0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   032caa95d2f7b       coredns-66bc5c9577-xdbjd               kube-system
	dd8cedf62a3d2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   fc0c79fd51f2b       kindnet-rq66b                          kube-system
	1e42f625fe2d5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   5f98d7db27d7f       kube-proxy-hjsb7                       kube-system
	669fc4f74aec7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   38 seconds ago      Running             kube-controller-manager   0                   6c6b8df556fd1       kube-controller-manager-pause-902975   kube-system
	61f8ff1cbeb42       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   38 seconds ago      Running             kube-apiserver            0                   e0fc5bb597ee8       kube-apiserver-pause-902975            kube-system
	0f8ecf9e59d3a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   38 seconds ago      Running             kube-scheduler            0                   eea1d19f3cb2f       kube-scheduler-pause-902975            kube-system
	287fc0673cdca       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   38 seconds ago      Running             etcd                      0                   bb9636a8fcd2b       etcd-pause-902975                      kube-system
	
	
	==> coredns [71b8d1d6c2ee08888694bb218f0ca907500db3e5f72ee39a90ecb0d21465f22d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43775 - 6217 "HINFO IN 8861103399179030914.5202394650079970877. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.111940903s
	
	
	==> describe nodes <==
	Name:               pause-902975
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-902975
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=pause-902975
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_36_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:36:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-902975
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:36:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:36:25 +0000   Sat, 01 Nov 2025 09:36:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-902975
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2cc41fd6-14ae-4855-b6b7-b665ab3cd675
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-xdbjd                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-pause-902975                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-rq66b                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-902975             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-pause-902975    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-hjsb7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-902975             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 34s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s   kubelet          Node pause-902975 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s   kubelet          Node pause-902975 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s   kubelet          Node pause-902975 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s   node-controller  Node pause-902975 event: Registered Node pause-902975 in Controller
	  Normal  NodeReady                18s   kubelet          Node pause-902975 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 53 1e 0b f5 f9 08 06
	[ +20.616610] IPv4: martian source 10.244.0.1 from 10.244.0.54, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff a2 5d 8b 4b c3 ca 08 06
	[Nov 1 08:58] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.063864] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023900] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023945] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023887] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +1.023903] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +2.047798] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[Nov 1 08:59] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[  +8.511341] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +16.382756] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	[ +32.253538] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: d6 85 5f c8 7d 59 9e b7 1f 20 b0 2d 08 00
	
	
	==> etcd [287fc0673cdca8876c33403572eaa2834852ba185e0ca80e84089fe7470bf64b] <==
	{"level":"warn","ts":"2025-11-01T09:36:01.868748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.877592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.889093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.897875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.907948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.914737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.921705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.929743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.938251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.951056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.959625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.967765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.975826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.983193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.989777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:01.997490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.005032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.012173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.020213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.028946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.037389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.055162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.061955Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.071128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:36:02.145187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42420","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:36:39 up  1:18,  0 user,  load average: 6.80, 2.70, 1.56
	Linux pause-902975 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dd8cedf62a3d2368c72d2d988319c5255e252ef0120a3aeb588100c9cf6eadd1] <==
	I1101 09:36:11.414021       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:36:11.414432       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:36:11.414573       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:36:11.414587       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:36:11.414600       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:36:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:36:11.620324       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:36:11.620517       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:36:11.620663       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:36:11.712848       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:36:12.115258       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:36:12.115291       1 metrics.go:72] Registering metrics
	I1101 09:36:12.115362       1 controller.go:711] "Syncing nftables rules"
	I1101 09:36:21.621051       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:36:21.621123       1 main.go:301] handling current node
	I1101 09:36:31.625125       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:36:31.625176       1 main.go:301] handling current node
	
	
	==> kube-apiserver [61f8ff1cbeb42879157dbc67a68f842c5f0ad25acbe02d4eb9d5c1babec228a7] <==
	I1101 09:36:02.692254       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:36:02.692261       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:36:02.692332       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:36:02.693774       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:36:02.696039       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:36:02.708566       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:36:02.719999       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:36:02.733462       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:36:03.601689       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:36:03.606621       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:36:03.606701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:36:04.316961       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:36:04.367452       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:36:04.502156       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:36:04.511335       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 09:36:04.512567       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:36:04.517867       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:36:04.630983       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:36:05.339318       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:36:05.355935       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:36:05.365826       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:36:09.888384       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:36:09.895212       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:36:10.632896       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:36:10.733104       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [669fc4f74aec78463a493d4a46267a7870a5dccdab4cbdc8910f092fd3f54377] <==
	I1101 09:36:09.629089       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:36:09.629091       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:36:09.629365       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:36:09.629409       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:36:09.629648       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:36:09.630646       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:36:09.630865       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:36:09.630924       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:36:09.634027       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:36:09.636223       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:36:09.637364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:36:09.639297       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:36:09.639605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:36:09.639690       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:36:09.646408       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:36:09.665178       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:36:09.679035       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:36:09.679208       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:36:09.681387       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:36:09.681524       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:36:09.682245       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:36:09.682293       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:36:09.683678       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:36:09.695464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:36:24.631510       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1e42f625fe2d5fc1f929ca2ebc8efab89b4da0503bf76e4280b90f200452e532] <==
	I1101 09:36:11.243053       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:36:11.310243       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:36:11.410506       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:36:11.410583       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:36:11.411050       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:36:11.452715       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:36:11.452950       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:36:11.459108       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:36:11.459587       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:36:11.459678       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:36:11.461395       1 config.go:309] "Starting node config controller"
	I1101 09:36:11.461465       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:36:11.461495       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:36:11.461432       1 config.go:200] "Starting service config controller"
	I1101 09:36:11.461545       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:36:11.461716       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:36:11.461737       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:36:11.461848       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:36:11.461876       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:36:11.562806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:36:11.562818       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:36:11.563276       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0f8ecf9e59d3a7c95293acc9cb3817e3e41a75c5fe78a517a7ffcd24e116a3a3] <==
	E1101 09:36:02.657042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:36:02.657048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:36:02.657070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:36:02.657098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:36:02.657100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:36:02.657235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:36:02.657330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:36:03.474955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:36:03.498161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:36:03.576144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:36:03.601023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:36:03.626452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:36:03.771313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:36:03.806688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:36:03.809392       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:36:03.817508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:36:03.826070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:36:03.834003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:36:03.849330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:36:03.954535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:36:04.013702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:36:04.047216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:36:04.051937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:36:04.157851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:36:06.449510       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.249357    1315 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.296766    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-902975" podStartSLOduration=1.296724308 podStartE2EDuration="1.296724308s" podCreationTimestamp="2025-11-01 09:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.296715739 +0000 UTC m=+1.164582377" watchObservedRunningTime="2025-11-01 09:36:06.296724308 +0000 UTC m=+1.164590937"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.321752    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-902975" podStartSLOduration=1.32172414 podStartE2EDuration="1.32172414s" podCreationTimestamp="2025-11-01 09:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.310131142 +0000 UTC m=+1.177997778" watchObservedRunningTime="2025-11-01 09:36:06.32172414 +0000 UTC m=+1.189590774"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.322284    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-902975" podStartSLOduration=1.322261586 podStartE2EDuration="1.322261586s" podCreationTimestamp="2025-11-01 09:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.321623996 +0000 UTC m=+1.189490634" watchObservedRunningTime="2025-11-01 09:36:06.322261586 +0000 UTC m=+1.190128230"
	Nov 01 09:36:06 pause-902975 kubelet[1315]: I1101 09:36:06.342666    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-902975" podStartSLOduration=2.342643932 podStartE2EDuration="2.342643932s" podCreationTimestamp="2025-11-01 09:36:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:06.333307343 +0000 UTC m=+1.201173993" watchObservedRunningTime="2025-11-01 09:36:06.342643932 +0000 UTC m=+1.210510570"
	Nov 01 09:36:09 pause-902975 kubelet[1315]: I1101 09:36:09.631244    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:36:09 pause-902975 kubelet[1315]: I1101 09:36:09.632793    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793623    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c4a249ae-598a-453b-9f91-dfbb001b87ed-lib-modules\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793684    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xr4t2\" (UniqueName: \"kubernetes.io/projected/c4a249ae-598a-453b-9f91-dfbb001b87ed-kube-api-access-xr4t2\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793708    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/42726f22-7f43-4f8e-be0a-66bacaff1da1-cni-cfg\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793732    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c4a249ae-598a-453b-9f91-dfbb001b87ed-xtables-lock\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793752    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb95p\" (UniqueName: \"kubernetes.io/projected/42726f22-7f43-4f8e-be0a-66bacaff1da1-kube-api-access-rb95p\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793774    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c4a249ae-598a-453b-9f91-dfbb001b87ed-kube-proxy\") pod \"kube-proxy-hjsb7\" (UID: \"c4a249ae-598a-453b-9f91-dfbb001b87ed\") " pod="kube-system/kube-proxy-hjsb7"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793794    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42726f22-7f43-4f8e-be0a-66bacaff1da1-xtables-lock\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:10 pause-902975 kubelet[1315]: I1101 09:36:10.793812    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42726f22-7f43-4f8e-be0a-66bacaff1da1-lib-modules\") pod \"kindnet-rq66b\" (UID: \"42726f22-7f43-4f8e-be0a-66bacaff1da1\") " pod="kube-system/kindnet-rq66b"
	Nov 01 09:36:11 pause-902975 kubelet[1315]: I1101 09:36:11.342813    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-rq66b" podStartSLOduration=1.342789764 podStartE2EDuration="1.342789764s" podCreationTimestamp="2025-11-01 09:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:11.342255904 +0000 UTC m=+6.210122546" watchObservedRunningTime="2025-11-01 09:36:11.342789764 +0000 UTC m=+6.210656402"
	Nov 01 09:36:11 pause-902975 kubelet[1315]: I1101 09:36:11.356625    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hjsb7" podStartSLOduration=1.3566083 podStartE2EDuration="1.3566083s" podCreationTimestamp="2025-11-01 09:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:11.356081328 +0000 UTC m=+6.223947951" watchObservedRunningTime="2025-11-01 09:36:11.3566083 +0000 UTC m=+6.224474937"
	Nov 01 09:36:21 pause-902975 kubelet[1315]: I1101 09:36:21.714557    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:36:21 pause-902975 kubelet[1315]: I1101 09:36:21.771941    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bed642d7-1538-4486-9390-dd23c039bed7-config-volume\") pod \"coredns-66bc5c9577-xdbjd\" (UID: \"bed642d7-1538-4486-9390-dd23c039bed7\") " pod="kube-system/coredns-66bc5c9577-xdbjd"
	Nov 01 09:36:21 pause-902975 kubelet[1315]: I1101 09:36:21.772009    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfd22\" (UniqueName: \"kubernetes.io/projected/bed642d7-1538-4486-9390-dd23c039bed7-kube-api-access-dfd22\") pod \"coredns-66bc5c9577-xdbjd\" (UID: \"bed642d7-1538-4486-9390-dd23c039bed7\") " pod="kube-system/coredns-66bc5c9577-xdbjd"
	Nov 01 09:36:22 pause-902975 kubelet[1315]: I1101 09:36:22.370025    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-xdbjd" podStartSLOduration=12.37000153 podStartE2EDuration="12.37000153s" podCreationTimestamp="2025-11-01 09:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:36:22.369497193 +0000 UTC m=+17.237363831" watchObservedRunningTime="2025-11-01 09:36:22.37000153 +0000 UTC m=+17.237868169"
	Nov 01 09:36:32 pause-902975 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:36:32 pause-902975 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:36:32 pause-902975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:36:32 pause-902975 systemd[1]: kubelet.service: Consumed 1.247s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-902975 -n pause-902975
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-902975 -n pause-902975: exit status 2 (546.929104ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
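(editor's sketch, not harness output) The template above queried only the APIServer field, so the non-zero exit does not show which component tripped it; a broader probe of the same status struct (field names per minikube's documented --format template support) would be:

	out/minikube-linux-amd64 status -p pause-902975 --output json
	out/minikube-linux-amd64 status -p pause-902975 --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'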
helpers_test.go:269: (dbg) Run:  kubectl --context pause-902975 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.97s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (306.998989ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:42:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
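(editor's sketch, not harness output) The failing paused check shells into the node and lists containers with runc, which expects its state directory under /run/runc; a minimal repro of what the stderr above reports (profile name taken from this test; crictl added only as a runtime-agnostic cross-check, not something the harness runs):

	out/minikube-linux-amd64 ssh -p old-k8s-version-106430 -- sudo runc list -f json    # reproduces: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p old-k8s-version-106430 -- sudo crictl ps -a         # asks CRI-O for container state directly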
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-106430 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-106430 describe deploy/metrics-server -n kube-system: exit status 1 (72.091682ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-106430 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
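(editor's sketch, not harness output) The assertion above compares the deployment's container image string; a minimal manual equivalent, assuming the metrics-server deployment had actually been created, is:

	kubectl --context old-k8s-version-106430 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'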
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-106430
helpers_test.go:243: (dbg) docker inspect old-k8s-version-106430:

-- stdout --
	[
	    {
	        "Id": "7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca",
	        "Created": "2025-11-01T09:41:36.12631196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 381982,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:41:36.200302983Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/hosts",
	        "LogPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca-json.log",
	        "Name": "/old-k8s-version-106430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-106430:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-106430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca",
	                "LowerDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-106430",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-106430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-106430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-106430",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-106430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d764c02debdc17ebca839e075a4d5bf419cc551d8f73bf0d8fb7b8f8d171d711",
	            "SandboxKey": "/var/run/docker/netns/d764c02debdc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-106430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e2:9c:c0:cb:b4:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eae036c06ea144341078058874d7c650e992adb447b26734be766752bb055131",
	                    "EndpointID": "cf6aadb497d2bf0c7c96d2883920f8c2d09ed652d4c82cf9d6647d9d3ba4512d",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-106430",
	                        "7fdf9f94daa8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
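For orientation, the host-side API-server endpoint of this container can be read back out of the inspect output above with a Go-template query (a sketch against the profile name shown there, not part of the captured test run):

    docker port old-k8s-version-106430 8443
    # or, equivalently:
    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' old-k8s-version-106430
    # both resolve the "8443/tcp" entry in the Ports map above to 127.0.0.1:33091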
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-106430 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-106430 logs -n 25: (2.465965646s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/kubernetes/kubelet.conf                                                                                                           │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /var/lib/kubelet/config.yaml                                                                                                           │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status docker --all --full --no-pager                                                                                            │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat docker --no-pager                                                                                                            │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/docker/daemon.json                                                                                                                │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo docker system info                                                                                                                         │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status cri-docker --all --full --no-pager                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat cri-docker --no-pager                                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cri-dockerd --version                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status containerd --all --full --no-pager                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat containerd --no-pager                                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /lib/systemd/system/containerd.service                                                                                                 │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/containerd/config.toml                                                                                                            │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo containerd config dump                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status crio --all --full --no-pager                                                                                              │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat crio --no-pager                                                                                                              │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                    │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo crio config                                                                                                                                │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p custom-flannel-307390                                                                                                                                                 │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                          │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
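	# The failing step under test corresponds to the last audit row above; replayed outside the harness it would be
	# (a sketch, flags copied from that row):
	#   out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-106430 \
	#     --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain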
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:42:28
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:42:28.021202  400655 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:42:28.021478  400655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:28.021489  400655 out.go:374] Setting ErrFile to fd 2...
	I1101 09:42:28.021493  400655 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:28.021727  400655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:42:28.022338  400655 out.go:368] Setting JSON to false
	I1101 09:42:28.023544  400655 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5086,"bootTime":1761985062,"procs":292,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:42:28.023642  400655 start.go:143] virtualization: kvm guest
	I1101 09:42:28.026290  400655 out.go:179] * [default-k8s-diff-port-927869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:42:28.027763  400655 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:42:28.027829  400655 notify.go:221] Checking for updates...
	I1101 09:42:28.031610  400655 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:42:28.033037  400655 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:28.034535  400655 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:42:28.035881  400655 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:42:28.037218  400655 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:42:28.039178  400655 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:42:28.039327  400655 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:42:28.039441  400655 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:28.039564  400655 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:42:28.064540  400655 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:42:28.064665  400655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:28.127736  400655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:28.115186479 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:28.127891  400655 docker.go:319] overlay module found
	I1101 09:42:28.129779  400655 out.go:179] * Using the docker driver based on user configuration
	I1101 09:42:26.811117  388438 out.go:252]   - Configuring RBAC rules ...
	I1101 09:42:26.811271  388438 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:42:26.814867  388438 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:42:26.821620  388438 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:42:26.824698  388438 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:42:26.827980  388438 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:42:26.832291  388438 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:42:27.168525  388438 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:42:27.588026  388438 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:42:28.170493  388438 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:42:28.171686  388438 kubeadm.go:319] 
	I1101 09:42:28.171809  388438 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:42:28.171834  388438 kubeadm.go:319] 
	I1101 09:42:28.171970  388438 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:42:28.171991  388438 kubeadm.go:319] 
	I1101 09:42:28.172034  388438 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:42:28.172119  388438 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:42:28.172191  388438 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:42:28.172201  388438 kubeadm.go:319] 
	I1101 09:42:28.172275  388438 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:42:28.172284  388438 kubeadm.go:319] 
	I1101 09:42:28.172349  388438 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:42:28.172359  388438 kubeadm.go:319] 
	I1101 09:42:28.172430  388438 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:42:28.172536  388438 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:42:28.172635  388438 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:42:28.172645  388438 kubeadm.go:319] 
	I1101 09:42:28.172758  388438 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:42:28.172864  388438 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:42:28.172874  388438 kubeadm.go:319] 
	I1101 09:42:28.172998  388438 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6iz263.3wvcnvjxl5sqcn6p \
	I1101 09:42:28.173143  388438 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 \
	I1101 09:42:28.173177  388438 kubeadm.go:319] 	--control-plane 
	I1101 09:42:28.173186  388438 kubeadm.go:319] 
	I1101 09:42:28.173302  388438 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:42:28.173312  388438 kubeadm.go:319] 
	I1101 09:42:28.173423  388438 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6iz263.3wvcnvjxl5sqcn6p \
	I1101 09:42:28.173568  388438 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 
	I1101 09:42:28.178110  388438 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:42:28.178250  388438 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:42:28.178285  388438 cni.go:84] Creating CNI manager for ""
	I1101 09:42:28.178296  388438 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:28.180505  388438 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1101 09:42:28.131129  400655 start.go:309] selected driver: docker
	I1101 09:42:28.131148  400655 start.go:930] validating driver "docker" against <nil>
	I1101 09:42:28.131164  400655 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:42:28.131881  400655 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:28.204586  400655 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:28.191723623 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:28.204862  400655 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 09:42:28.205210  400655 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:42:28.206959  400655 out.go:179] * Using Docker driver with root privileges
	I1101 09:42:28.208392  400655 cni.go:84] Creating CNI manager for ""
	I1101 09:42:28.208488  400655 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:28.208502  400655 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:42:28.208600  400655 start.go:353] cluster config:
	{Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:28.210159  400655 out.go:179] * Starting "default-k8s-diff-port-927869" primary control-plane node in "default-k8s-diff-port-927869" cluster
	I1101 09:42:28.211310  400655 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:42:28.212488  400655 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:42:28.213662  400655 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:42:28.213706  400655 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:42:28.213720  400655 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:42:28.213740  400655 cache.go:59] Caching tarball of preloaded images
	I1101 09:42:28.213860  400655 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:42:28.213902  400655 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:42:28.214069  400655 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json ...
	I1101 09:42:28.214102  400655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json: {Name:mk94342665309a3bbc6e9e4760c91b9fbe92df31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:28.243484  400655 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:42:28.243515  400655 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:42:28.243536  400655 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:42:28.243587  400655 start.go:360] acquireMachinesLock for default-k8s-diff-port-927869: {Name:mk1d147ba61fa7b0d79d77d5ddb1fccc76bfa8fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:42:28.243712  400655 start.go:364] duration metric: took 101.67µs to acquireMachinesLock for "default-k8s-diff-port-927869"
	I1101 09:42:28.243748  400655 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:28.243845  400655 start.go:125] createHost starting for "" (driver="docker")
	I1101 09:42:28.184754  388438 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:42:28.191771  388438 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:42:28.191795  388438 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:42:28.207412  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:42:28.504839  388438 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:42:28.505115  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:28.505199  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-224845 minikube.k8s.io/updated_at=2025_11_01T09_42_28_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=no-preload-224845 minikube.k8s.io/primary=true
	I1101 09:42:28.611402  388438 ops.go:34] apiserver oom_adj: -16
	I1101 09:42:28.611459  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:29.112140  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:29.611814  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:30.112153  388438 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
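	# The --discovery-token-ca-cert-hash printed with the kubeadm join commands above can be re-derived on the
	# control plane (standard kubeadm recipe; default PKI path assumed):
	#   openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	#     | openssl rsa -pubin -outform der 2>/dev/null \
	#     | openssl dgst -sha256 -hex | sed 's/^.* //'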
	
	
	==> CRI-O <==
	Nov 01 09:42:18 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:18.332400906Z" level=info msg="Starting container: 7bda302face1f587bc5cb140c033bbb129e8c05b7755ecece462befb5c04aa60" id=da4780f0-4904-46bc-94e4-784b0cc83e67 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:42:18 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:18.334864189Z" level=info msg="Started container" PID=2134 containerID=7bda302face1f587bc5cb140c033bbb129e8c05b7755ecece462befb5c04aa60 description=kube-system/coredns-5dd5756b68-xh2rf/coredns id=da4780f0-4904-46bc-94e4-784b0cc83e67 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2ca0e5bdb42a5db4b4c66da02438523d306955bf041cb02273aa8281486cf3be
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.021707565Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a208f0d6-5d39-46c8-aaa9-994588d765b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.021822525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.028714726Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fc4b16d0930083f40df94f57842422cae67e7ab6424eb6d06027c3129acd37a9 UID:34bda5ed-1800-4728-8a22-d00b1e7edd29 NetNS:/var/run/netns/302f813a-1b82-42e6-90f3-63b9b78573cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128df8}] Aliases:map[]}"
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.028757708Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.041252003Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fc4b16d0930083f40df94f57842422cae67e7ab6424eb6d06027c3129acd37a9 UID:34bda5ed-1800-4728-8a22-d00b1e7edd29 NetNS:/var/run/netns/302f813a-1b82-42e6-90f3-63b9b78573cd Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128df8}] Aliases:map[]}"
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.041424363Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.043545801Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.044741212Z" level=info msg="Ran pod sandbox fc4b16d0930083f40df94f57842422cae67e7ab6424eb6d06027c3129acd37a9 with infra container: default/busybox/POD" id=a208f0d6-5d39-46c8-aaa9-994588d765b3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.046475761Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=96efab9e-70ad-40d2-a229-7a0719d538e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.046631379Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=96efab9e-70ad-40d2-a229-7a0719d538e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.046679122Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=96efab9e-70ad-40d2-a229-7a0719d538e5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.047367143Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=347ef09b-ed47-4209-8239-6b52f6ca4459 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:21 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:21.04961277Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.620643755Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=347ef09b-ed47-4209-8239-6b52f6ca4459 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.62488164Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6011d570-7bce-4652-9e76-cbfdceccc2dc name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.628236357Z" level=info msg="Creating container: default/busybox/busybox" id=abc8a592-4182-4a9a-be06-0c4634d111e2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.628384835Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.633057628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.633548824Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.681273189Z" level=info msg="Created container e481d7237394dc3548d37002012d7bbaad77f2b8ada05d8ac14dab8f4b4aa7b0: default/busybox/busybox" id=abc8a592-4182-4a9a-be06-0c4634d111e2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.682393201Z" level=info msg="Starting container: e481d7237394dc3548d37002012d7bbaad77f2b8ada05d8ac14dab8f4b4aa7b0" id=6bfb0482-3a08-4306-a91d-b98ec3ac5005 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:42:23 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:23.685553195Z" level=info msg="Started container" PID=2209 containerID=e481d7237394dc3548d37002012d7bbaad77f2b8ada05d8ac14dab8f4b4aa7b0 description=default/busybox/busybox id=6bfb0482-3a08-4306-a91d-b98ec3ac5005 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc4b16d0930083f40df94f57842422cae67e7ab6424eb6d06027c3129acd37a9
	Nov 01 09:42:29 old-k8s-version-106430 crio[782]: time="2025-11-01T09:42:29.831380871Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
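	# The busybox pull recorded above can be double-checked on the node with crictl, mirroring the ssh
	# invocations elsewhere in this report (a sketch, not part of the captured run):
	#   out/minikube-linux-amd64 ssh -p old-k8s-version-106430 sudo crictl inspecti gcr.io/k8s-minikube/busybox:1.28.4-glibc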
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	e481d7237394d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   fc4b16d093008       busybox                                          default
	7bda302face1f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   2ca0e5bdb42a5       coredns-5dd5756b68-xh2rf                         kube-system
	7dc5442c478f6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   09872a02502d1       storage-provisioner                              kube-system
	fd7e022aeec77       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   89753e8f8cf9c       kindnet-5v6hn                                    kube-system
	79dd50310f29f       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   de659942b9241       kube-proxy-zqs8f                                 kube-system
	769e0eeb3112c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      46 seconds ago      Running             etcd                      0                   aabae61ffc46a       etcd-old-k8s-version-106430                      kube-system
	81fd0a678e4a9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      46 seconds ago      Running             kube-controller-manager   0                   a7c1c309233d8       kube-controller-manager-old-k8s-version-106430   kube-system
	f42ecbf2772b5       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      46 seconds ago      Running             kube-apiserver            0                   43b7a2737e784       kube-apiserver-old-k8s-version-106430            kube-system
	0959f36e9a264       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      46 seconds ago      Running             kube-scheduler            0                   c3d6471d1c39d       kube-scheduler-old-k8s-version-106430            kube-system
	
	
	==> coredns [7bda302face1f587bc5cb140c033bbb129e8c05b7755ecece462befb5c04aa60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59106 - 47301 "HINFO IN 6565538298160613162.6210916932252665351. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069839412s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-106430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-106430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=old-k8s-version-106430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:41:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-106430
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:42:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:42:21 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:42:21 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:42:21 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:42:21 +0000   Sat, 01 Nov 2025 09:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-106430
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                576f40f7-444f-4b9e-a2cc-82322f1cc662
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-xh2rf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29s
	  kube-system                 etcd-old-k8s-version-106430                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-5v6hn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-106430             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-106430    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-zqs8f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-106430             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 47s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  47s (x9 over 47s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-106430 event: Registered Node old-k8s-version-106430 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-106430 status is now: NodeReady
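	# The condition and event tables above come from "kubectl describe node"; the conditions alone can be pulled
	# with a jsonpath query (a sketch, using the kubeconfig minikube writes):
	#   kubectl get node old-k8s-version-106430 -o jsonpath='{range .status.conditions[*]}{.type}={.status} {.reason}{"\n"}{end}'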
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [769e0eeb3112ca2eec5cceb8d34b20d3f67f3a03d6d420505da046bc706c08d3] <==
	{"level":"info","ts":"2025-11-01T09:41:46.202871Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-01T09:41:46.203029Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-01T09:41:46.203077Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-11-01T09:41:46.203097Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:41:46.203105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-01T09:41:46.203117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-11-01T09:41:46.203137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-01T09:41:46.204075Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:41:46.204882Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:41:46.204885Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-106430 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:41:46.204929Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:41:46.205156Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:41:46.205254Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:41:46.205248Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:41:46.205275Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:41:46.205306Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:41:46.206424Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-01T09:41:46.206502Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:42:08.294193Z","caller":"traceutil/trace.go:171","msg":"trace[2053378746] linearizableReadLoop","detail":"{readStateIndex:432; appliedIndex:431; }","duration":"107.549611ms","start":"2025-11-01T09:42:08.186623Z","end":"2025-11-01T09:42:08.294172Z","steps":["trace[2053378746] 'read index received'  (duration: 107.383551ms)","trace[2053378746] 'applied index is now lower than readState.Index'  (duration: 164.741µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:42:08.294303Z","caller":"traceutil/trace.go:171","msg":"trace[1088843052] transaction","detail":"{read_only:false; response_revision:420; number_of_response:1; }","duration":"110.837829ms","start":"2025-11-01T09:42:08.183437Z","end":"2025-11-01T09:42:08.294275Z","steps":["trace[1088843052] 'process raft request'  (duration: 110.586821ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:42:08.294401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.705944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-01T09:42:08.294444Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"106.803183ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-106430\" ","response":"range_response_count:1 size:5711"}
	{"level":"info","ts":"2025-11-01T09:42:08.294463Z","caller":"traceutil/trace.go:171","msg":"trace[1073575038] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:420; }","duration":"107.866604ms","start":"2025-11-01T09:42:08.186585Z","end":"2025-11-01T09:42:08.294451Z","steps":["trace[1073575038] 'agreement among raft nodes before linearized reading'  (duration: 107.697762ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:42:08.294475Z","caller":"traceutil/trace.go:171","msg":"trace[823575384] range","detail":"{range_begin:/registry/minions/old-k8s-version-106430; range_end:; response_count:1; response_revision:420; }","duration":"106.843941ms","start":"2025-11-01T09:42:08.187622Z","end":"2025-11-01T09:42:08.294466Z","steps":["trace[823575384] 'agreement among raft nodes before linearized reading'  (duration: 106.76908ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:42:31.661136Z","caller":"traceutil/trace.go:171","msg":"trace[188972574] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"111.855247ms","start":"2025-11-01T09:42:31.549257Z","end":"2025-11-01T09:42:31.661112Z","steps":["trace[188972574] 'process raft request'  (duration: 111.723265ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:42:32 up  1:24,  0 user,  load average: 6.19, 4.54, 2.80
	Linux old-k8s-version-106430 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd7e022aeec77f08b7f00237927ec010e4787d2c34071509ff79f733ff69d493] <==
	I1101 09:42:07.262772       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:42:07.263042       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:42:07.263218       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:42:07.263239       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:42:07.263263       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:42:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:42:07.465253       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:42:07.465484       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:42:07.465611       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:42:07.466024       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:42:07.861650       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:42:07.861683       1 metrics.go:72] Registering metrics
	I1101 09:42:07.861770       1 controller.go:711] "Syncing nftables rules"
	I1101 09:42:17.471193       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:42:17.471242       1 main.go:301] handling current node
	I1101 09:42:27.466047       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:42:27.466175       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f42ecbf2772b5250c440ecf1043f2698b9b1b73cb28183130585e20d394c4d5c] <==
	I1101 09:41:47.498032       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:41:47.498070       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:41:47.498267       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1101 09:41:47.498299       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:41:47.498307       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:41:47.498313       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:41:47.498320       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:41:47.499330       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:41:47.504682       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:41:47.690058       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:41:48.404157       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:41:48.409057       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:41:48.409080       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:41:48.939707       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:41:48.982560       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:41:49.109388       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:41:49.116686       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 09:41:49.118105       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:41:49.124279       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:41:49.458744       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:41:50.406353       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:41:50.417294       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:41:50.429052       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1101 09:42:03.048163       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1101 09:42:03.197361       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [81fd0a678e4a931b4df52667c409a767a764a6a5f9a04448bf7587c77b69f30a] <==
	I1101 09:42:02.829631       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1101 09:42:02.831939       1 shared_informer.go:318] Caches are synced for disruption
	I1101 09:42:02.863675       1 shared_informer.go:318] Caches are synced for resource quota
	I1101 09:42:03.051678       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1101 09:42:03.186033       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:42:03.211728       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-zqs8f"
	I1101 09:42:03.213565       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-5v6hn"
	I1101 09:42:03.242972       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:42:03.243005       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:42:03.653207       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-qkp7s"
	I1101 09:42:03.661013       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xh2rf"
	I1101 09:42:03.691308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="639.665474ms"
	I1101 09:42:03.715572       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="24.196164ms"
	I1101 09:42:03.715808       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="124.008µs"
	I1101 09:42:04.249217       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1101 09:42:04.282969       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-qkp7s"
	I1101 09:42:04.321188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.986156ms"
	I1101 09:42:04.333694       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.432527ms"
	I1101 09:42:04.334718       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="105.093µs"
	I1101 09:42:17.945222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="135.336µs"
	I1101 09:42:17.974052       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="158.877µs"
	I1101 09:42:18.579437       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="129.098µs"
	I1101 09:42:18.614617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.207709ms"
	I1101 09:42:18.614767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="88.845µs"
	I1101 09:42:22.621497       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [79dd50310f29f7215f2ec567739d29d6a2c6c4943e385c65ec62cfa10e1b217f] <==
	I1101 09:42:04.296176       1 server_others.go:69] "Using iptables proxy"
	I1101 09:42:04.330119       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1101 09:42:04.397043       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:42:04.400559       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:42:04.400605       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:42:04.400618       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:42:04.400673       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:42:04.400970       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:42:04.401365       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:42:04.402173       1 config.go:315] "Starting node config controller"
	I1101 09:42:04.402248       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:42:04.402607       1 config.go:188] "Starting service config controller"
	I1101 09:42:04.402633       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:42:04.402719       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:42:04.402736       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:42:04.502830       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:42:04.502877       1 shared_informer.go:318] Caches are synced for node config
	I1101 09:42:04.502848       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [0959f36e9a2640ad7ebe783fbfcf3944dd3ec6d01538187910c93d6a4f6b8329] <==
	W1101 09:41:47.465388       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 09:41:47.465414       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 09:41:48.328022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1101 09:41:48.328067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1101 09:41:48.454394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1101 09:41:48.454434       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1101 09:41:48.454794       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1101 09:41:48.454824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1101 09:41:48.459115       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1101 09:41:48.459215       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1101 09:41:48.477294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1101 09:41:48.477339       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1101 09:41:48.489831       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1101 09:41:48.489874       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1101 09:41:48.513356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1101 09:41:48.513397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1101 09:41:48.565757       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1101 09:41:48.565788       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:41:48.580109       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1101 09:41:48.580153       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1101 09:41:48.615478       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1101 09:41:48.615508       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1101 09:41:48.741708       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1101 09:41:48.741747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I1101 09:41:51.161905       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: I1101 09:42:03.235265    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/834c3d0a-03fc-480c-a4c6-9f010159b1f9-lib-modules\") pod \"kube-proxy-zqs8f\" (UID: \"834c3d0a-03fc-480c-a4c6-9f010159b1f9\") " pod="kube-system/kube-proxy-zqs8f"
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: I1101 09:42:03.235344    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cvfg\" (UniqueName: \"kubernetes.io/projected/834c3d0a-03fc-480c-a4c6-9f010159b1f9-kube-api-access-4cvfg\") pod \"kube-proxy-zqs8f\" (UID: \"834c3d0a-03fc-480c-a4c6-9f010159b1f9\") " pod="kube-system/kube-proxy-zqs8f"
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: I1101 09:42:03.235385    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/68338c9c-3108-4c9f-8fed-214858c90ef5-lib-modules\") pod \"kindnet-5v6hn\" (UID: \"68338c9c-3108-4c9f-8fed-214858c90ef5\") " pod="kube-system/kindnet-5v6hn"
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: I1101 09:42:03.235441    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qr8ps\" (UniqueName: \"kubernetes.io/projected/68338c9c-3108-4c9f-8fed-214858c90ef5-kube-api-access-qr8ps\") pod \"kindnet-5v6hn\" (UID: \"68338c9c-3108-4c9f-8fed-214858c90ef5\") " pod="kube-system/kindnet-5v6hn"
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: I1101 09:42:03.235507    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/68338c9c-3108-4c9f-8fed-214858c90ef5-xtables-lock\") pod \"kindnet-5v6hn\" (UID: \"68338c9c-3108-4c9f-8fed-214858c90ef5\") " pod="kube-system/kindnet-5v6hn"
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: E1101 09:42:03.345057    1391 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: E1101 09:42:03.345111    1391 projected.go:198] Error preparing data for projected volume kube-api-access-4cvfg for pod kube-system/kube-proxy-zqs8f: configmap "kube-root-ca.crt" not found
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: E1101 09:42:03.345070    1391 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: E1101 09:42:03.345198    1391 projected.go:198] Error preparing data for projected volume kube-api-access-qr8ps for pod kube-system/kindnet-5v6hn: configmap "kube-root-ca.crt" not found
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: E1101 09:42:03.345212    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/834c3d0a-03fc-480c-a4c6-9f010159b1f9-kube-api-access-4cvfg podName:834c3d0a-03fc-480c-a4c6-9f010159b1f9 nodeName:}" failed. No retries permitted until 2025-11-01 09:42:03.845173263 +0000 UTC m=+13.471411474 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4cvfg" (UniqueName: "kubernetes.io/projected/834c3d0a-03fc-480c-a4c6-9f010159b1f9-kube-api-access-4cvfg") pod "kube-proxy-zqs8f" (UID: "834c3d0a-03fc-480c-a4c6-9f010159b1f9") : configmap "kube-root-ca.crt" not found
	Nov 01 09:42:03 old-k8s-version-106430 kubelet[1391]: E1101 09:42:03.345244    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/68338c9c-3108-4c9f-8fed-214858c90ef5-kube-api-access-qr8ps podName:68338c9c-3108-4c9f-8fed-214858c90ef5 nodeName:}" failed. No retries permitted until 2025-11-01 09:42:03.845231709 +0000 UTC m=+13.471469922 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qr8ps" (UniqueName: "kubernetes.io/projected/68338c9c-3108-4c9f-8fed-214858c90ef5-kube-api-access-qr8ps") pod "kindnet-5v6hn" (UID: "68338c9c-3108-4c9f-8fed-214858c90ef5") : configmap "kube-root-ca.crt" not found
	Nov 01 09:42:04 old-k8s-version-106430 kubelet[1391]: I1101 09:42:04.541239    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-zqs8f" podStartSLOduration=1.5411773850000001 podCreationTimestamp="2025-11-01 09:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:04.541168879 +0000 UTC m=+14.167407094" watchObservedRunningTime="2025-11-01 09:42:04.541177385 +0000 UTC m=+14.167415596"
	Nov 01 09:42:07 old-k8s-version-106430 kubelet[1391]: I1101 09:42:07.608337    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5v6hn" podStartSLOduration=1.820832481 podCreationTimestamp="2025-11-01 09:42:03 +0000 UTC" firstStartedPulling="2025-11-01 09:42:04.142757451 +0000 UTC m=+13.768995654" lastFinishedPulling="2025-11-01 09:42:06.930206772 +0000 UTC m=+16.556444981" observedRunningTime="2025-11-01 09:42:07.607971766 +0000 UTC m=+17.234209975" watchObservedRunningTime="2025-11-01 09:42:07.608281808 +0000 UTC m=+17.234520020"
	Nov 01 09:42:17 old-k8s-version-106430 kubelet[1391]: I1101 09:42:17.906160    1391 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 01 09:42:17 old-k8s-version-106430 kubelet[1391]: I1101 09:42:17.943429    1391 topology_manager.go:215] "Topology Admit Handler" podUID="b8fde0f9-bc13-41ca-9adc-2b0edc592938" podNamespace="kube-system" podName="storage-provisioner"
	Nov 01 09:42:17 old-k8s-version-106430 kubelet[1391]: I1101 09:42:17.945056    1391 topology_manager.go:215] "Topology Admit Handler" podUID="2dc48063-a93a-46c9-b6da-451a12b954c3" podNamespace="kube-system" podName="coredns-5dd5756b68-xh2rf"
	Nov 01 09:42:18 old-k8s-version-106430 kubelet[1391]: I1101 09:42:18.037261    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxbkw\" (UniqueName: \"kubernetes.io/projected/2dc48063-a93a-46c9-b6da-451a12b954c3-kube-api-access-vxbkw\") pod \"coredns-5dd5756b68-xh2rf\" (UID: \"2dc48063-a93a-46c9-b6da-451a12b954c3\") " pod="kube-system/coredns-5dd5756b68-xh2rf"
	Nov 01 09:42:18 old-k8s-version-106430 kubelet[1391]: I1101 09:42:18.037381    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b8fde0f9-bc13-41ca-9adc-2b0edc592938-tmp\") pod \"storage-provisioner\" (UID: \"b8fde0f9-bc13-41ca-9adc-2b0edc592938\") " pod="kube-system/storage-provisioner"
	Nov 01 09:42:18 old-k8s-version-106430 kubelet[1391]: I1101 09:42:18.037424    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvlb5\" (UniqueName: \"kubernetes.io/projected/b8fde0f9-bc13-41ca-9adc-2b0edc592938-kube-api-access-cvlb5\") pod \"storage-provisioner\" (UID: \"b8fde0f9-bc13-41ca-9adc-2b0edc592938\") " pod="kube-system/storage-provisioner"
	Nov 01 09:42:18 old-k8s-version-106430 kubelet[1391]: I1101 09:42:18.037524    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dc48063-a93a-46c9-b6da-451a12b954c3-config-volume\") pod \"coredns-5dd5756b68-xh2rf\" (UID: \"2dc48063-a93a-46c9-b6da-451a12b954c3\") " pod="kube-system/coredns-5dd5756b68-xh2rf"
	Nov 01 09:42:18 old-k8s-version-106430 kubelet[1391]: I1101 09:42:18.590770    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.590714115 podCreationTimestamp="2025-11-01 09:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:18.590661894 +0000 UTC m=+28.216900105" watchObservedRunningTime="2025-11-01 09:42:18.590714115 +0000 UTC m=+28.216952326"
	Nov 01 09:42:18 old-k8s-version-106430 kubelet[1391]: I1101 09:42:18.590860    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xh2rf" podStartSLOduration=15.590837947 podCreationTimestamp="2025-11-01 09:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:18.579570472 +0000 UTC m=+28.205808685" watchObservedRunningTime="2025-11-01 09:42:18.590837947 +0000 UTC m=+28.217076156"
	Nov 01 09:42:20 old-k8s-version-106430 kubelet[1391]: I1101 09:42:20.719831    1391 topology_manager.go:215] "Topology Admit Handler" podUID="34bda5ed-1800-4728-8a22-d00b1e7edd29" podNamespace="default" podName="busybox"
	Nov 01 09:42:20 old-k8s-version-106430 kubelet[1391]: I1101 09:42:20.757823    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh76p\" (UniqueName: \"kubernetes.io/projected/34bda5ed-1800-4728-8a22-d00b1e7edd29-kube-api-access-lh76p\") pod \"busybox\" (UID: \"34bda5ed-1800-4728-8a22-d00b1e7edd29\") " pod="default/busybox"
	Nov 01 09:42:24 old-k8s-version-106430 kubelet[1391]: I1101 09:42:24.597408    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.023264202 podCreationTimestamp="2025-11-01 09:42:20 +0000 UTC" firstStartedPulling="2025-11-01 09:42:21.046979045 +0000 UTC m=+30.673217249" lastFinishedPulling="2025-11-01 09:42:23.621076004 +0000 UTC m=+33.247314206" observedRunningTime="2025-11-01 09:42:24.59716902 +0000 UTC m=+34.223407232" watchObservedRunningTime="2025-11-01 09:42:24.597361159 +0000 UTC m=+34.223599371"
	
	
	==> storage-provisioner [7dc5442c478f6dbf40a69c52f862fd31fc53ababa0bdf7468484a8b38d73a623] <==
	I1101 09:42:18.345135       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:42:18.357337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:42:18.357794       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:42:18.371307       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:42:18.371654       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-106430_2fc096cc-a2da-4782-895d-9daff9bb7542!
	I1101 09:42:18.371602       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d7c9887-d56d-4587-80ec-07ecbd12d0c2", APIVersion:"v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-106430_2fc096cc-a2da-4782-895d-9daff9bb7542 became leader
	I1101 09:42:18.472591       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-106430_2fc096cc-a2da-4782-895d-9daff9bb7542!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-106430 -n old-k8s-version-106430
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-106430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (4.08s)
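Note on the dump above: the etcd log records two "apply request took too long" warnings (~107ms against the 100ms expected-duration budget), which lines up with the 6.19 load average in the kernel section rather than an etcd fault. Since etcd emits JSON lines, the slow applies can be pulled out of a saved dump with jq; a minimal sketch, assuming the etcd section has been saved to etcd.log (hypothetical filename):

	jq -c 'select(.level == "warn" and .msg == "apply request took too long") | {ts, took, request}' etcd.log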

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (276.103008ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:42:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
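The root cause is in the stderr above: before enabling an addon, minikube checks for paused containers by running `sudo runc list -f json` on the node, and runc exits with status 1 because its state directory /run/runc does not exist. A minimal sketch of reproducing the failing probe by hand (profile name taken from this test; assumes the node is still running):

	minikube ssh -p no-preload-224845 -- sudo runc list -f json
	# fails the same way: open /run/runc: no such file or directory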
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-224845 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-224845 describe deploy/metrics-server -n kube-system: exit status 1 (71.209848ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-224845 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
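The describe fails because the metrics-server deployment was never created: the enable command exited before applying any manifests, so the image assertion has nothing to inspect. For reference, a narrower check than describe, sketched against the same context (only meaningful once the deployment exists):

	kubectl --context no-preload-224845 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# the test expects the image to contain fake.domain/registry.k8s.io/echoserver:1.4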
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-224845
helpers_test.go:243: (dbg) docker inspect no-preload-224845:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8",
	        "Created": "2025-11-01T09:41:51.624954273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389064,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:41:51.673151275Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/hosts",
	        "LogPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8-json.log",
	        "Name": "/no-preload-224845",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-224845:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-224845",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8",
	                "LowerDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-224845",
	                "Source": "/var/lib/docker/volumes/no-preload-224845/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-224845",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-224845",
	                "name.minikube.sigs.k8s.io": "no-preload-224845",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "601408085ca507a055fc2b8b2cbc593627b79f608a6cf0ceab09dc8305632248",
	            "SandboxKey": "/var/run/docker/netns/601408085ca5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-224845": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a2:43:19:a2:ad:48",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd9ea47f59972660007e0e7f49bc24269f3213f6370bb54b3108ffd5b79a05aa",
	                    "EndpointID": "30e2f757c6d7688e11109e8fcdb0835d6558aa0f6ff074e029ac1fd157c0f8ea",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-224845",
	                        "968b2e1f8788"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-224845 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-224845 logs -n 25: (1.229617784s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat docker --no-pager                                                                                                                                                                                 │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/docker/daemon.json                                                                                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo docker system info                                                                                                                                                                                              │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo crio config                                                                                                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p custom-flannel-307390                                                                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:42:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:42:50.614027  406120 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:42:50.614344  406120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:50.614355  406120 out.go:374] Setting ErrFile to fd 2...
	I1101 09:42:50.614360  406120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:50.614709  406120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:42:50.615372  406120 out.go:368] Setting JSON to false
	I1101 09:42:50.616698  406120 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5109,"bootTime":1761985062,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:42:50.616802  406120 start.go:143] virtualization: kvm guest
	I1101 09:42:50.619836  406120 out.go:179] * [old-k8s-version-106430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:42:50.621872  406120 notify.go:221] Checking for updates...
	I1101 09:42:50.621888  406120 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:42:50.628674  406120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:42:50.630225  406120 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:50.631472  406120 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:42:50.632961  406120 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:42:50.634435  406120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:42:50.638045  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:50.641595  406120 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:42:50.642850  406120 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:42:50.683326  406120 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:42:50.683460  406120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:50.783005  406120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:50.759487501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:50.783150  406120 docker.go:319] overlay module found
	I1101 09:42:50.787488  406120 out.go:179] * Using the docker driver based on existing profile
	I1101 09:42:50.788572  406120 start.go:309] selected driver: docker
	I1101 09:42:50.788595  406120 start.go:930] validating driver "docker" against &{Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:50.788779  406120 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:42:50.789528  406120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:50.912241  406120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:50.89413684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:50.912615  406120 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:42:50.912665  406120 cni.go:84] Creating CNI manager for ""
	I1101 09:42:50.912718  406120 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:50.912765  406120 start.go:353] cluster config:
	{Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:50.914622  406120 out.go:179] * Starting "old-k8s-version-106430" primary control-plane node in "old-k8s-version-106430" cluster
	I1101 09:42:50.915954  406120 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:42:50.917379  406120 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:42:50.918514  406120 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:42:50.918573  406120 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:42:50.918590  406120 cache.go:59] Caching tarball of preloaded images
	I1101 09:42:50.918663  406120 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:42:50.918693  406120 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:42:50.918906  406120 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:42:50.919196  406120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/config.json ...
	I1101 09:42:50.948106  406120 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:42:50.948138  406120 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:42:50.948154  406120 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:42:50.948189  406120 start.go:360] acquireMachinesLock for old-k8s-version-106430: {Name:mk47cab1e1fd681dae6862a843f54c2590f288ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:42:50.948282  406120 start.go:364] duration metric: took 39.062µs to acquireMachinesLock for "old-k8s-version-106430"
	I1101 09:42:50.948308  406120 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:42:50.948318  406120 fix.go:54] fixHost starting: 
	I1101 09:42:50.948612  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:50.972291  406120 fix.go:112] recreateIfNeeded on old-k8s-version-106430: state=Stopped err=<nil>
	W1101 09:42:50.972324  406120 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:42:50.531203  396593 addons.go:239] Setting addon default-storageclass=true in "embed-certs-214580"
	I1101 09:42:50.531229  396593 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:50.531249  396593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:50.531260  396593 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:42:50.531310  396593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:42:50.533063  396593 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:42:50.568680  396593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:42:50.568816  396593 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:50.568869  396593 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:50.568974  396593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:42:50.601158  396593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:42:50.613732  396593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:42:50.676436  396593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:50.697529  396593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:50.737333  396593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:50.882009  396593 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
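For reference, the sed pipeline above rewrites the CoreDNS Corefile in place; reconstructed from the sed expressions (indentation approximate, since the log does not dump the resulting Corefile), the block injected ahead of the forward directive looks like:

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}

with a log directive additionally inserted before errors. This is the edit the "host record injected into CoreDNS's ConfigMap" line above confirms.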
	I1101 09:42:50.886149  396593 node_ready.go:35] waiting up to 6m0s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:42:51.161964  396593 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:42:51.163280  396593 addons.go:515] duration metric: took 670.456657ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:42:51.388894  396593 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-214580" context rescaled to 1 replicas
	I1101 09:42:49.376252  400655 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:42:49.383040  400655 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:42:49.383063  400655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:42:49.398858  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:42:49.653613  400655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:42:49.653808  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-927869 minikube.k8s.io/updated_at=2025_11_01T09_42_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=default-k8s-diff-port-927869 minikube.k8s.io/primary=true
	I1101 09:42:49.653892  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:49.665596  400655 ops.go:34] apiserver oom_adj: -16
	I1101 09:42:49.749820  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:50.250114  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:50.750585  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:51.250809  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:51.750006  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:52.250183  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:52.750028  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.250186  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.749999  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.822757  400655 kubeadm.go:1114] duration metric: took 4.168926091s to wait for elevateKubeSystemPrivileges
	I1101 09:42:53.822793  400655 kubeadm.go:403] duration metric: took 15.047661715s to StartCluster
	I1101 09:42:53.822817  400655 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:53.822903  400655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:53.824503  400655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:53.824773  400655 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:53.824788  400655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:42:53.824818  400655 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:42:53.824999  400655 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-927869"
	I1101 09:42:53.825027  400655 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-927869"
	I1101 09:42:53.825039  400655 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-927869"
	I1101 09:42:53.825063  400655 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:42:53.825088  400655 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-927869"
	I1101 09:42:53.825051  400655 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:42:53.825501  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.825647  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.828120  400655 out.go:179] * Verifying Kubernetes components...
	I1101 09:42:53.829949  400655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:53.849634  400655 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-927869"
	I1101 09:42:53.849672  400655 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:42:53.850090  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.850960  400655 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:42:53.852691  400655 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:53.852716  400655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:53.852783  400655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:42:53.882229  400655 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:53.882257  400655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:53.882320  400655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:42:53.885053  400655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:42:53.907017  400655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:42:53.935846  400655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:42:53.988391  400655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:54.013317  400655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:54.045518  400655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:54.149470  400655 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:42:54.151155  400655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:42:54.374035  400655 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:42:50.974210  406120 out.go:252] * Restarting existing docker container for "old-k8s-version-106430" ...
	I1101 09:42:50.974286  406120 cli_runner.go:164] Run: docker start old-k8s-version-106430
	I1101 09:42:51.290157  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:51.314807  406120 kic.go:430] container "old-k8s-version-106430" state is running.
	I1101 09:42:51.315254  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:51.341531  406120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/config.json ...
	I1101 09:42:51.341904  406120 machine.go:94] provisionDockerMachine start ...
	I1101 09:42:51.342010  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:51.365591  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:51.365960  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:51.365981  406120 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:42:51.366590  406120 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47468->127.0.0.1:33108: read: connection reset by peer
	I1101 09:42:54.518255  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-106430
	
	I1101 09:42:54.518288  406120 ubuntu.go:182] provisioning hostname "old-k8s-version-106430"
	I1101 09:42:54.518353  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:54.539831  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:54.540106  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:54.540129  406120 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-106430 && echo "old-k8s-version-106430" | sudo tee /etc/hostname
	I1101 09:42:54.702026  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-106430
	
	I1101 09:42:54.702114  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:54.724817  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:54.725136  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:54.725167  406120 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-106430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-106430/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-106430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:42:54.876787  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:42:54.876817  406120 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:42:54.876844  406120 ubuntu.go:190] setting up certificates
	I1101 09:42:54.876853  406120 provision.go:84] configureAuth start
	I1101 09:42:54.876906  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:54.896638  406120 provision.go:143] copyHostCerts
	I1101 09:42:54.896701  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:42:54.896718  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:42:54.896786  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:42:54.896893  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:42:54.896901  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:42:54.896956  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:42:54.897025  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:42:54.897034  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:42:54.897058  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:42:54.897110  406120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-106430 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-106430]
	I1101 09:42:54.980885  406120 provision.go:177] copyRemoteCerts
	I1101 09:42:54.980976  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:42:54.981016  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.002790  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.107988  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:42:55.129045  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:42:55.148507  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:42:55.168604  406120 provision.go:87] duration metric: took 291.735137ms to configureAuth
	I1101 09:42:55.168634  406120 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:42:55.168849  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:55.169027  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.187704  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:55.187966  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:55.187993  406120 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:42:55.497800  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:42:55.497831  406120 machine.go:97] duration metric: took 4.155886646s to provisionDockerMachine
	I1101 09:42:55.497846  406120 start.go:293] postStartSetup for "old-k8s-version-106430" (driver="docker")
	I1101 09:42:55.497860  406120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:42:55.497949  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:42:55.498013  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.519255  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.622520  406120 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:42:55.626564  406120 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:42:55.626626  406120 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:42:55.626647  406120 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:42:55.626715  406120 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:42:55.626812  406120 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:42:55.626948  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:42:55.635496  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:42:55.657654  406120 start.go:296] duration metric: took 159.790682ms for postStartSetup
	I1101 09:42:55.657758  406120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:42:55.657821  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.676825  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.778028  406120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:42:55.783383  406120 fix.go:56] duration metric: took 4.835054698s for fixHost
	I1101 09:42:55.783417  406120 start.go:83] releasing machines lock for "old-k8s-version-106430", held for 4.83512021s
	I1101 09:42:55.783495  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:55.804416  406120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:42:55.804456  406120 ssh_runner.go:195] Run: cat /version.json
	I1101 09:42:55.804492  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.804505  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.824353  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.824865  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.981055  406120 ssh_runner.go:195] Run: systemctl --version
	I1101 09:42:55.988383  406120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:42:56.025779  406120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:42:56.031204  406120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:42:56.031292  406120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:42:56.040425  406120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:42:56.040454  406120 start.go:496] detecting cgroup driver to use...
	I1101 09:42:56.040493  406120 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:42:56.040550  406120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:42:56.056165  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:42:56.071243  406120 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:42:56.071318  406120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:42:56.087584  406120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:42:56.102101  406120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:42:56.185386  406120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:42:56.270398  406120 docker.go:234] disabling docker service ...
	I1101 09:42:56.270483  406120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:42:56.287689  406120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:42:56.302743  406120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:42:56.390775  406120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:42:56.477451  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:42:56.490747  406120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:42:56.507214  406120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:42:56.507281  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.518382  406120 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:42:56.518457  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.527846  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.539349  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.548816  406120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:42:56.557406  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.567380  406120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.576904  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
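Taken together, the sed edits between 09:42:56.507 and 09:42:56.577 leave /etc/crio/crio.conf.d/02-crio.conf with settings equivalent to the following sketch (reconstructed from the sed expressions; the resulting file is not dumped in this log):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The daemon-reload and crio restart that follow pick these settings up.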
	I1101 09:42:56.586527  406120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:42:56.594509  406120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:42:56.602525  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:56.689010  406120 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:42:56.807304  406120 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:42:56.807374  406120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:42:56.811774  406120 start.go:564] Will wait 60s for crictl version
	I1101 09:42:56.811826  406120 ssh_runner.go:195] Run: which crictl
	I1101 09:42:56.815686  406120 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:42:56.841111  406120 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:42:56.841202  406120 ssh_runner.go:195] Run: crio --version
	I1101 09:42:56.870245  406120 ssh_runner.go:195] Run: crio --version
	I1101 09:42:56.903409  406120 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	W1101 09:42:52.889922  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	W1101 09:42:54.890128  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	W1101 09:42:57.390285  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	I1101 09:42:56.904675  406120 cli_runner.go:164] Run: docker network inspect old-k8s-version-106430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:42:56.922956  406120 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:42:56.927507  406120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:42:56.938255  406120 kubeadm.go:884] updating cluster {Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:42:56.938367  406120 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:42:56.938406  406120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:42:56.972069  406120 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:42:56.972094  406120 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:42:56.972148  406120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:42:57.002691  406120 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:42:57.002716  406120 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:42:57.002725  406120 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1101 09:42:57.002856  406120 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-106430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:42:57.002967  406120 ssh_runner.go:195] Run: crio config
	I1101 09:42:57.051562  406120 cni.go:84] Creating CNI manager for ""
	I1101 09:42:57.051580  406120 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:57.051594  406120 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:42:57.051624  406120 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-106430 NodeName:old-k8s-version-106430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:42:57.051795  406120 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-106430"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:42:57.051865  406120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:42:57.060477  406120 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:42:57.060538  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:42:57.069511  406120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1101 09:42:57.083613  406120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:42:57.097812  406120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
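The kubeadm config printed earlier is staged here as /var/tmp/minikube/kubeadm.yaml.new. On a node that still needed bootstrapping, minikube would hand such a file to kubeadm; a manual equivalent (hypothetical invocation for illustration, not taken from this log) would be roughly:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml

In this restart path the cluster already exists, so the log never actually runs kubeadm init.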
	I1101 09:42:57.111580  406120 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:42:57.115488  406120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:42:57.126011  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:57.213189  406120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:57.238996  406120 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430 for IP: 192.168.103.2
	I1101 09:42:57.239022  406120 certs.go:195] generating shared ca certs ...
	I1101 09:42:57.239045  406120 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:57.239236  406120 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:42:57.239286  406120 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:42:57.239299  406120 certs.go:257] generating profile certs ...
	I1101 09:42:57.239410  406120 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/client.key
	I1101 09:42:57.239470  406120 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.key.08895b71
	I1101 09:42:57.239520  406120 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.key
	I1101 09:42:57.239670  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:42:57.239711  406120 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:42:57.239721  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:42:57.239755  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:42:57.239792  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:42:57.239816  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:42:57.239872  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:42:57.240646  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:42:57.261275  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:42:57.280725  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:42:57.302620  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:42:57.324849  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:42:57.346130  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:42:57.364807  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:42:57.382889  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:42:57.401595  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:42:57.420604  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:42:57.440611  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:42:57.460759  406120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:42:57.475075  406120 ssh_runner.go:195] Run: openssl version
	I1101 09:42:57.482420  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:42:57.491762  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.496929  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.497002  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.536750  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:42:57.545765  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:42:57.554820  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.559339  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.559405  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.598430  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:42:57.607527  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:42:57.616648  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.620647  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.620708  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.659548  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:42:57.668681  406120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:42:57.672696  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:42:57.708768  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:42:57.747304  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:42:57.796024  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:42:57.836290  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:42:57.889574  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
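
Each `openssl x509 -noout -in <cert> -checkend 86400` run above exits non-zero when the certificate stops being valid within the next 86400 seconds (24 hours), which is how the tooling decides whether a cert needs regeneration. A minimal Go equivalent of that check, assuming the same certificate path as the log; this is a sketch, not minikube's own implementation:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same certificate the log checks with `openssl x509 -checkend 86400`.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400 semantics: fail if the cert is no longer valid 24h from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least 24h")
}
```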
	I1101 09:42:57.936314  406120 kubeadm.go:401] StartCluster: {Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:57.936438  406120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:42:57.936498  406120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:42:57.970374  406120 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:42:57.970398  406120 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:42:57.970403  406120 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:42:57.970408  406120 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:42:57.970412  406120 cri.go:89] found id: ""
	I1101 09:42:57.970460  406120 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:42:57.984075  406120 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:42:57Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:42:57.984151  406120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:42:57.993097  406120 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:42:57.993119  406120 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:42:57.993172  406120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:42:58.001723  406120 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:42:58.003096  406120 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-106430" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:58.004036  406120 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-106430" cluster setting kubeconfig missing "old-k8s-version-106430" context setting]
	I1101 09:42:58.005461  406120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.007975  406120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:42:58.016714  406120 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 09:42:58.016755  406120 kubeadm.go:602] duration metric: took 23.628873ms to restartPrimaryControlPlane
	I1101 09:42:58.016767  406120 kubeadm.go:403] duration metric: took 80.466912ms to StartCluster
	I1101 09:42:58.016787  406120 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.016859  406120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:58.019146  406120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.019406  406120 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:54.375638  400655 addons.go:515] duration metric: took 550.809564ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:42:54.654510  400655 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-927869" context rescaled to 1 replicas
	W1101 09:42:56.154636  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:42:58.019624  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:58.019516  406120 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:42:58.019681  406120 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019692  406120 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-106430"
	W1101 09:42:58.019698  406120 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:42:58.019725  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.019740  406120 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019723  406120 addons.go:70] Setting dashboard=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019762  406120 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-106430"
	I1101 09:42:58.019781  406120 addons.go:239] Setting addon dashboard=true in "old-k8s-version-106430"
	W1101 09:42:58.019793  406120 addons.go:248] addon dashboard should already be in state true
	I1101 09:42:58.019834  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.020108  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.020244  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.020320  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.022567  406120 out.go:179] * Verifying Kubernetes components...
	I1101 09:42:58.024092  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:58.047043  406120 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-106430"
	W1101 09:42:58.047078  406120 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:42:58.047127  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.047350  406120 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:42:58.048017  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.048837  406120 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:58.048858  406120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:58.048940  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.051652  406120 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:42:58.052846  406120 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Nov 01 09:42:47 no-preload-224845 crio[775]: time="2025-11-01T09:42:47.379542419Z" level=info msg="Starting container: 04897359bd90d1139efc63542f3f9f1112cc2f7997d65f0e367cff79a31b5dae" id=cc761464-4693-4664-8c81-88bdac92317e name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:42:47 no-preload-224845 crio[775]: time="2025-11-01T09:42:47.38207393Z" level=info msg="Started container" PID=2890 containerID=04897359bd90d1139efc63542f3f9f1112cc2f7997d65f0e367cff79a31b5dae description=kube-system/coredns-66bc5c9577-8qn69/coredns id=cc761464-4693-4664-8c81-88bdac92317e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4e7fd7256ecd37b723289c9b82b7697edf264dbc684d61aae2b875cbb0414b3
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.836122179Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7faee76f-b0d6-4d43-b1c1-09cc3688c721 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.836246132Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.841589666Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:af3b533420d34bf9e31ff246d04fac0d72cbc6ba4b21de2165fafd4f91369923 UID:9e4e4413-d3b7-4a5f-b088-241e94f310a4 NetNS:/var/run/netns/39afc302-755e-4e70-9a7a-9b3dd809b2e9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00046ef00}] Aliases:map[]}"
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.841645983Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.852412112Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:af3b533420d34bf9e31ff246d04fac0d72cbc6ba4b21de2165fafd4f91369923 UID:9e4e4413-d3b7-4a5f-b088-241e94f310a4 NetNS:/var/run/netns/39afc302-755e-4e70-9a7a-9b3dd809b2e9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00046ef00}] Aliases:map[]}"
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.85254388Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.853594418Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.85494319Z" level=info msg="Ran pod sandbox af3b533420d34bf9e31ff246d04fac0d72cbc6ba4b21de2165fafd4f91369923 with infra container: default/busybox/POD" id=7faee76f-b0d6-4d43-b1c1-09cc3688c721 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.856545053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ba8676ad-23ba-4fb1-a2b7-d87c004c480c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.856741472Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ba8676ad-23ba-4fb1-a2b7-d87c004c480c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.856810101Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ba8676ad-23ba-4fb1-a2b7-d87c004c480c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.857536126Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=93ce5115-0c4f-42ef-8488-ec1fc2177437 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:49 no-preload-224845 crio[775]: time="2025-11-01T09:42:49.861785941Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.017011938Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=93ce5115-0c4f-42ef-8488-ec1fc2177437 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.0177068Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6716a79c-3dc3-4d4e-b189-639028c82042 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.019140968Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e2d20cb3-ed16-411e-b665-16f1613a3211 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.023225043Z" level=info msg="Creating container: default/busybox/busybox" id=fd3e8f28-e736-4011-a799-daf02a07918f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.023365137Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.026871349Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.027452959Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.062279325Z" level=info msg="Created container b08ee2c39c5d2b9d8ec7b5d52aaf6c049809cdda5e9fdc091e5ce3ec200b4218: default/busybox/busybox" id=fd3e8f28-e736-4011-a799-daf02a07918f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.063002444Z" level=info msg="Starting container: b08ee2c39c5d2b9d8ec7b5d52aaf6c049809cdda5e9fdc091e5ce3ec200b4218" id=e6f348dc-0c5d-41ac-b443-4647f2917d63 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:42:52 no-preload-224845 crio[775]: time="2025-11-01T09:42:52.064743557Z" level=info msg="Started container" PID=2965 containerID=b08ee2c39c5d2b9d8ec7b5d52aaf6c049809cdda5e9fdc091e5ce3ec200b4218 description=default/busybox/busybox id=e6f348dc-0c5d-41ac-b443-4647f2917d63 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af3b533420d34bf9e31ff246d04fac0d72cbc6ba4b21de2165fafd4f91369923
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b08ee2c39c5d2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   af3b533420d34       busybox                                     default
	04897359bd90d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   a4e7fd7256ecd       coredns-66bc5c9577-8qn69                    kube-system
	a9505db735397       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   372df6ae2c330       storage-provisioner                         kube-system
	ea6ba18480f15       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   9c82e92fd96b5       kindnet-24485                               kube-system
	c71e3ecc042a9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      27 seconds ago      Running             kube-proxy                0                   31daeb29cfed1       kube-proxy-f2f64                            kube-system
	078cba9eb5b47       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      38 seconds ago      Running             kube-apiserver            0                   198d34990641a       kube-apiserver-no-preload-224845            kube-system
	a52620af472c0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      38 seconds ago      Running             etcd                      0                   9c0a599d6a803       etcd-no-preload-224845                      kube-system
	cc205c134a279       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      38 seconds ago      Running             kube-scheduler            0                   ed07533d09d76       kube-scheduler-no-preload-224845            kube-system
	33f07d675b754       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      38 seconds ago      Running             kube-controller-manager   0                   fc715760d94e5       kube-controller-manager-no-preload-224845   kube-system
	
	
	==> coredns [04897359bd90d1139efc63542f3f9f1112cc2f7997d65f0e367cff79a31b5dae] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39963 - 50746 "HINFO IN 9111639031572051525.6025270480074927062. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022042801s
	
	
	==> describe nodes <==
	Name:               no-preload-224845
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-224845
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=no-preload-224845
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-224845
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:42:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:42:58 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:42:58 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:42:58 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:42:58 +0000   Sat, 01 Nov 2025 09:42:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-224845
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                3cec4f20-c471-4766-a85c-05fa10e538f8
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-8qn69                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-224845                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-24485                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-224845             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-224845    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-f2f64                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-224845             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node no-preload-224845 event: Registered Node no-preload-224845 in Controller
	  Normal  NodeReady                15s                kubelet          Node no-preload-224845 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [a52620af472c0468bf0801400898541d93f3e859b70eeba424dc6b0abe3b8959] <==
	{"level":"warn","ts":"2025-11-01T09:42:24.269392Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.285288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.303209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.311973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.327557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.338932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.345087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:24.410174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:31.228489Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.724206ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:42:31.228612Z","caller":"traceutil/trace.go:172","msg":"trace[174754123] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/statefulset-controller; range_end:; response_count:0; response_revision:327; }","duration":"100.860131ms","start":"2025-11-01T09:42:31.127729Z","end":"2025-11-01T09:42:31.228589Z","steps":["trace[174754123] 'range keys from in-memory index tree'  (duration: 100.538397ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:42:32.144701Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.965806ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" limit:1 ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2025-11-01T09:42:32.144744Z","caller":"traceutil/trace.go:172","msg":"trace[2008058571] transaction","detail":"{read_only:false; response_revision:332; number_of_response:1; }","duration":"112.267979ms","start":"2025-11-01T09:42:32.032449Z","end":"2025-11-01T09:42:32.144717Z","steps":["trace[2008058571] 'process raft request'  (duration: 84.306324ms)","trace[2008058571] 'compare'  (duration: 27.833154ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:42:32.144777Z","caller":"traceutil/trace.go:172","msg":"trace[918620188] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:331; }","duration":"110.069042ms","start":"2025-11-01T09:42:32.034696Z","end":"2025-11-01T09:42:32.144765Z","steps":["trace[918620188] 'agreement among raft nodes before linearized reading'  (duration: 82.009593ms)","trace[918620188] 'range keys from in-memory index tree'  (duration: 27.820577ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:42:32.244875Z","caller":"traceutil/trace.go:172","msg":"trace[503508096] linearizableReadLoop","detail":"{readStateIndex:343; appliedIndex:343; }","duration":"128.172639ms","start":"2025-11-01T09:42:32.116679Z","end":"2025-11-01T09:42:32.244852Z","steps":["trace[503508096] 'read index received'  (duration: 128.15113ms)","trace[503508096] 'applied index is now lower than readState.Index'  (duration: 9.965µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:42:32.306678Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"224.7845ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/no-preload-224845\" limit:1 ","response":"range_response_count:1 size:721"}
	{"level":"info","ts":"2025-11-01T09:42:32.306807Z","caller":"traceutil/trace.go:172","msg":"trace[1842851750] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"204.611533ms","start":"2025-11-01T09:42:32.102175Z","end":"2025-11-01T09:42:32.306787Z","steps":["trace[1842851750] 'process raft request'  (duration: 142.700659ms)","trace[1842851750] 'compare'  (duration: 61.710309ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:42:32.306884Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.398112ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:42:32.306823Z","caller":"traceutil/trace.go:172","msg":"trace[1821838713] range","detail":"{range_begin:/registry/csinodes/no-preload-224845; range_end:; response_count:1; response_revision:332; }","duration":"224.951038ms","start":"2025-11-01T09:42:32.081857Z","end":"2025-11-01T09:42:32.306808Z","steps":["trace[1821838713] 'agreement among raft nodes before linearized reading'  (duration: 162.998161ms)","trace[1821838713] 'range keys from in-memory index tree'  (duration: 61.66369ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:42:32.306903Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.438197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-01T09:42:32.306953Z","caller":"traceutil/trace.go:172","msg":"trace[1848598473] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:333; }","duration":"129.457604ms","start":"2025-11-01T09:42:32.177474Z","end":"2025-11-01T09:42:32.306932Z","steps":["trace[1848598473] 'agreement among raft nodes before linearized reading'  (duration: 129.371324ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:42:32.306959Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"179.510005ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" limit:1 ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-01T09:42:32.306977Z","caller":"traceutil/trace.go:172","msg":"trace[1556281696] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:333; }","duration":"129.519783ms","start":"2025-11-01T09:42:32.177447Z","end":"2025-11-01T09:42:32.306966Z","steps":["trace[1556281696] 'agreement among raft nodes before linearized reading'  (duration: 129.347728ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:42:32.306990Z","caller":"traceutil/trace.go:172","msg":"trace[700384509] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:333; }","duration":"179.541299ms","start":"2025-11-01T09:42:32.127438Z","end":"2025-11-01T09:42:32.306980Z","steps":["trace[700384509] 'agreement among raft nodes before linearized reading'  (duration: 179.426993ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:42:32.306699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.632966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-01T09:42:32.307054Z","caller":"traceutil/trace.go:172","msg":"trace[2003527593] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:332; }","duration":"230.001ms","start":"2025-11-01T09:42:32.077045Z","end":"2025-11-01T09:42:32.307046Z","steps":["trace[2003527593] 'agreement among raft nodes before linearized reading'  (duration: 167.893971ms)","trace[2003527593] 'range keys from in-memory index tree'  (duration: 61.580407ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:43:01 up  1:25,  0 user,  load average: 5.76, 4.64, 2.89
	Linux no-preload-224845 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ea6ba18480f15afb1035909008c0f17cd3fa41544084196f9a53af12b856be5a] <==
	I1101 09:42:36.345674       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:42:36.345983       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:42:36.346147       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:42:36.346166       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:42:36.346194       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:42:36Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:42:36.549514       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:42:36.549564       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:42:36.549576       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:42:36.644537       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:42:36.943106       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:42:36.943139       1 metrics.go:72] Registering metrics
	I1101 09:42:36.943598       1 controller.go:711] "Syncing nftables rules"
	I1101 09:42:46.554235       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:42:46.554312       1 main.go:301] handling current node
	I1101 09:42:56.552015       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:42:56.552061       1 main.go:301] handling current node
	
	
	==> kube-apiserver [078cba9eb5b472fbf26aa146f83c41a90d71390b49faf6f778ea37b60e74c84f] <==
	I1101 09:42:25.002547       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:42:25.005026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:42:25.008123       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:42:25.008868       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:42:25.014857       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:42:25.033576       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:25.036428       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:42:25.906833       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:42:25.910823       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:42:25.910843       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:42:26.422463       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:42:26.462477       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:42:26.513183       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:42:26.520274       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1101 09:42:26.521518       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:42:26.526492       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:42:27.080563       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:42:27.575352       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:42:27.587029       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:42:27.596734       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:42:32.886762       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:42:32.936188       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:32.954023       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:33.134431       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:42:59.632781       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:56922: use of closed network connection
	
	
	==> kube-controller-manager [33f07d675b754c7b2be581b016ecfc0a790a5da2f876ba47ff732f2c4dbf84d8] <==
	I1101 09:42:32.074201       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:42:32.078807       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:42:32.080036       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:42:32.080038       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:42:32.080100       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:42:32.080107       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:42:32.080468       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:42:32.081544       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:42:32.081746       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:42:32.081836       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:42:32.081950       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:42:32.082940       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:42:32.085516       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:42:32.085581       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:42:32.086646       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:42:32.092963       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:42:32.098411       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:42:32.098538       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:42:32.098575       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:42:32.098582       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:42:32.098588       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:42:32.100706       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:42:32.102009       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:42:32.308347       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-224845" podCIDRs=["10.244.0.0/24"]
	I1101 09:42:47.032571       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c71e3ecc042a954ed687a5b0a864c00874f7b37ff45ae83835f1c3260d3f0b6c] <==
	I1101 09:42:33.833109       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:42:33.925538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:42:34.026435       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:42:34.026478       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:42:34.026606       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:42:34.054900       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:42:34.054988       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:42:34.063089       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:42:34.063520       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:42:34.063586       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:42:34.066439       1 config.go:200] "Starting service config controller"
	I1101 09:42:34.066613       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:42:34.066638       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:42:34.066577       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:42:34.066830       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:42:34.067603       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:42:34.067152       1 config.go:309] "Starting node config controller"
	I1101 09:42:34.067641       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:42:34.067649       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:42:34.166834       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:42:34.166869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:42:34.168337       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cc205c134a279880aec8609ba667cad39c93ce486625032c02c1ff3ace533759] <==
	E1101 09:42:24.966794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:42:24.966795       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:42:24.966901       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:42:24.966950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:42:24.966978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:42:24.967063       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:42:24.967307       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:42:24.967163       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:42:24.967471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:42:24.967604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:42:24.967632       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:42:24.967706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:42:24.967782       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:42:24.967972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:42:24.968007       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:42:24.968299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:42:24.968744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:42:25.787550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:42:25.946497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:42:25.985648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:42:26.113285       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:42:26.137502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:42:26.146772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:42:26.351784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:42:28.461409       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:42:28 no-preload-224845 kubelet[2292]: E1101 09:42:28.490905    2292 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-no-preload-224845\" already exists" pod="kube-system/etcd-no-preload-224845"
	Nov 01 09:42:28 no-preload-224845 kubelet[2292]: I1101 09:42:28.525449    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-224845" podStartSLOduration=1.5254218160000002 podStartE2EDuration="1.525421816s" podCreationTimestamp="2025-11-01 09:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:28.510893662 +0000 UTC m=+1.150836215" watchObservedRunningTime="2025-11-01 09:42:28.525421816 +0000 UTC m=+1.165364382"
	Nov 01 09:42:28 no-preload-224845 kubelet[2292]: I1101 09:42:28.537935    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-224845" podStartSLOduration=1.537900928 podStartE2EDuration="1.537900928s" podCreationTimestamp="2025-11-01 09:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:28.525741578 +0000 UTC m=+1.165684132" watchObservedRunningTime="2025-11-01 09:42:28.537900928 +0000 UTC m=+1.177843480"
	Nov 01 09:42:28 no-preload-224845 kubelet[2292]: I1101 09:42:28.548528    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-224845" podStartSLOduration=1.548503625 podStartE2EDuration="1.548503625s" podCreationTimestamp="2025-11-01 09:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:28.53832169 +0000 UTC m=+1.178264243" watchObservedRunningTime="2025-11-01 09:42:28.548503625 +0000 UTC m=+1.188446178"
	Nov 01 09:42:28 no-preload-224845 kubelet[2292]: I1101 09:42:28.563451    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-224845" podStartSLOduration=1.5634294290000001 podStartE2EDuration="1.563429429s" podCreationTimestamp="2025-11-01 09:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:28.548603215 +0000 UTC m=+1.188545768" watchObservedRunningTime="2025-11-01 09:42:28.563429429 +0000 UTC m=+1.203371981"
	Nov 01 09:42:32 no-preload-224845 kubelet[2292]: I1101 09:42:32.314952    2292 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:42:32 no-preload-224845 kubelet[2292]: I1101 09:42:32.316396    2292 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.282123    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2eaa5d33-a0e9-48ae-9399-6091171d4819-kube-proxy\") pod \"kube-proxy-f2f64\" (UID: \"2eaa5d33-a0e9-48ae-9399-6091171d4819\") " pod="kube-system/kube-proxy-f2f64"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.284824    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2eaa5d33-a0e9-48ae-9399-6091171d4819-xtables-lock\") pod \"kube-proxy-f2f64\" (UID: \"2eaa5d33-a0e9-48ae-9399-6091171d4819\") " pod="kube-system/kube-proxy-f2f64"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.286753    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/051e058a-53bf-4ab6-be63-7a2bda397004-cni-cfg\") pod \"kindnet-24485\" (UID: \"051e058a-53bf-4ab6-be63-7a2bda397004\") " pod="kube-system/kindnet-24485"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.287060    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lld48\" (UniqueName: \"kubernetes.io/projected/051e058a-53bf-4ab6-be63-7a2bda397004-kube-api-access-lld48\") pod \"kindnet-24485\" (UID: \"051e058a-53bf-4ab6-be63-7a2bda397004\") " pod="kube-system/kindnet-24485"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.287191    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2eaa5d33-a0e9-48ae-9399-6091171d4819-lib-modules\") pod \"kube-proxy-f2f64\" (UID: \"2eaa5d33-a0e9-48ae-9399-6091171d4819\") " pod="kube-system/kube-proxy-f2f64"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.292043    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xv7n\" (UniqueName: \"kubernetes.io/projected/2eaa5d33-a0e9-48ae-9399-6091171d4819-kube-api-access-9xv7n\") pod \"kube-proxy-f2f64\" (UID: \"2eaa5d33-a0e9-48ae-9399-6091171d4819\") " pod="kube-system/kube-proxy-f2f64"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.292295    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/051e058a-53bf-4ab6-be63-7a2bda397004-lib-modules\") pod \"kindnet-24485\" (UID: \"051e058a-53bf-4ab6-be63-7a2bda397004\") " pod="kube-system/kindnet-24485"
	Nov 01 09:42:33 no-preload-224845 kubelet[2292]: I1101 09:42:33.292328    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/051e058a-53bf-4ab6-be63-7a2bda397004-xtables-lock\") pod \"kindnet-24485\" (UID: \"051e058a-53bf-4ab6-be63-7a2bda397004\") " pod="kube-system/kindnet-24485"
	Nov 01 09:42:34 no-preload-224845 kubelet[2292]: I1101 09:42:34.506172    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f2f64" podStartSLOduration=1.5061499170000001 podStartE2EDuration="1.506149917s" podCreationTimestamp="2025-11-01 09:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:34.506081326 +0000 UTC m=+7.146023879" watchObservedRunningTime="2025-11-01 09:42:34.506149917 +0000 UTC m=+7.146092470"
	Nov 01 09:42:36 no-preload-224845 kubelet[2292]: I1101 09:42:36.516344    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-24485" podStartSLOduration=1.007440496 podStartE2EDuration="3.51632206s" podCreationTimestamp="2025-11-01 09:42:33 +0000 UTC" firstStartedPulling="2025-11-01 09:42:33.520684097 +0000 UTC m=+6.160626628" lastFinishedPulling="2025-11-01 09:42:36.029565644 +0000 UTC m=+8.669508192" observedRunningTime="2025-11-01 09:42:36.516108607 +0000 UTC m=+9.156051164" watchObservedRunningTime="2025-11-01 09:42:36.51632206 +0000 UTC m=+9.156264613"
	Nov 01 09:42:46 no-preload-224845 kubelet[2292]: I1101 09:42:46.979226    2292 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:42:47 no-preload-224845 kubelet[2292]: I1101 09:42:47.110218    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/caada0b2-96bb-49a3-84ca-d863dbc868f5-tmp\") pod \"storage-provisioner\" (UID: \"caada0b2-96bb-49a3-84ca-d863dbc868f5\") " pod="kube-system/storage-provisioner"
	Nov 01 09:42:47 no-preload-224845 kubelet[2292]: I1101 09:42:47.110271    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6a237a12-aaf7-47b2-abd8-2af3fc8486e3-config-volume\") pod \"coredns-66bc5c9577-8qn69\" (UID: \"6a237a12-aaf7-47b2-abd8-2af3fc8486e3\") " pod="kube-system/coredns-66bc5c9577-8qn69"
	Nov 01 09:42:47 no-preload-224845 kubelet[2292]: I1101 09:42:47.110296    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzc7m\" (UniqueName: \"kubernetes.io/projected/6a237a12-aaf7-47b2-abd8-2af3fc8486e3-kube-api-access-zzc7m\") pod \"coredns-66bc5c9577-8qn69\" (UID: \"6a237a12-aaf7-47b2-abd8-2af3fc8486e3\") " pod="kube-system/coredns-66bc5c9577-8qn69"
	Nov 01 09:42:47 no-preload-224845 kubelet[2292]: I1101 09:42:47.110326    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkj8k\" (UniqueName: \"kubernetes.io/projected/caada0b2-96bb-49a3-84ca-d863dbc868f5-kube-api-access-pkj8k\") pod \"storage-provisioner\" (UID: \"caada0b2-96bb-49a3-84ca-d863dbc868f5\") " pod="kube-system/storage-provisioner"
	Nov 01 09:42:47 no-preload-224845 kubelet[2292]: I1101 09:42:47.540931    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8qn69" podStartSLOduration=14.540893253 podStartE2EDuration="14.540893253s" podCreationTimestamp="2025-11-01 09:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:47.540774201 +0000 UTC m=+20.180716753" watchObservedRunningTime="2025-11-01 09:42:47.540893253 +0000 UTC m=+20.180835806"
	Nov 01 09:42:47 no-preload-224845 kubelet[2292]: I1101 09:42:47.565175    2292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.565137074999999 podStartE2EDuration="13.565137075s" podCreationTimestamp="2025-11-01 09:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:47.551834411 +0000 UTC m=+20.191776963" watchObservedRunningTime="2025-11-01 09:42:47.565137075 +0000 UTC m=+20.205079627"
	Nov 01 09:42:49 no-preload-224845 kubelet[2292]: I1101 09:42:49.630045    2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knnxq\" (UniqueName: \"kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq\") pod \"busybox\" (UID: \"9e4e4413-d3b7-4a5f-b088-241e94f310a4\") " pod="default/busybox"
	
	
	==> storage-provisioner [a9505db735397d8e0f5c346f8e90c8327d446b5f3f9f2fb4191e86f42ac9fc5d] <==
	I1101 09:42:47.381723       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:42:47.392823       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:42:47.392877       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:42:47.395537       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:47.402018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:42:47.402201       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:42:47.402363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-224845_9af868f2-c571-41d6-95aa-3db8c85e94fb!
	I1101 09:42:47.402338       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4838c3ac-e532-4877-9b16-b80d4afab202", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-224845_9af868f2-c571-41d6-95aa-3db8c85e94fb became leader
	W1101 09:42:47.407042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:47.411313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:42:47.502852       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-224845_9af868f2-c571-41d6-95aa-3db8c85e94fb!
	W1101 09:42:49.414791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:49.420068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:51.422974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:51.427256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:53.431043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:53.435382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:55.439504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:55.447032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:57.451092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:57.455828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:59.459755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:42:59.464798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:01.468308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:01.473371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
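The repeated "v1 Endpoints is deprecated" warnings in the storage-provisioner log above come from its leader election, which still writes a legacy v1 Endpoints lock (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event). A minimal follow-up check, not part of the captured run and assuming the no-preload-224845 profile is still running:

	# inspect the legacy Endpoints object the provisioner uses as its leader-election lock
	kubectl --context no-preload-224845 -n kube-system get endpoints k8s.io-minikube-hostpath
	# list the EndpointSlice objects the warning recommends as the replacement API
	kubectl --context no-preload-224845 -n kube-system get endpointslices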
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-224845 -n no-preload-224845
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-224845 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.265397ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:14Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
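The exit status 11 here comes from minikube's paused-state check: before enabling an addon it shells into the node and runs `sudo runc list -f json`, and that probe failed with "open /run/runc: no such file or directory". One plausible reading, given that the kic container mounts /run as tmpfs (see the "Tmpfs" entry in the docker inspect output below), is that the runc state directory had not been (re)created at the moment of the check. A hand-run sketch of the same probe, assuming the embed-certs-214580 profile is still up (these commands are not part of the harness):

	# re-run the exact probe the paused-state check performs
	minikube -p embed-certs-214580 ssh -- sudo runc list -f json
	# cross-check what the CRI runtime itself reports as running
	minikube -p embed-certs-214580 ssh -- sudo crictl ps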
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-214580 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-214580 describe deploy/metrics-server -n kube-system: exit status 1 (62.893375ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-214580 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
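For context on this assertion: the --registries=MetricsServer=fake.domain override prepends the fake registry to the addon image, so the test expects the deployment to reference fake.domain/registry.k8s.io/echoserver:1.4. Because the enable command itself exited 11, the metrics-server deployment was presumably never created, which matches the NotFound from the describe above and the empty deployment info. A direct way to read the image once such a deployment exists (hypothetical in this run, since the deployment is absent):

	kubectl --context embed-certs-214580 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'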
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-214580
helpers_test.go:243: (dbg) docker inspect embed-certs-214580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0",
	        "Created": "2025-11-01T09:42:23.57612126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 399190,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:42:23.641088791Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/hosts",
	        "LogPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0-json.log",
	        "Name": "/embed-certs-214580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-214580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-214580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0",
	                "LowerDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-214580",
	                "Source": "/var/lib/docker/volumes/embed-certs-214580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-214580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-214580",
	                "name.minikube.sigs.k8s.io": "embed-certs-214580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0a783e46df28e45e569727b078b42251498ebc4347260035569410d89225e59c",
	            "SandboxKey": "/var/run/docker/netns/0a783e46df28",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-214580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:1c:29:17:6d:37",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef396acdcfefe4b7ce9bad3abfa4446d31948191a9bcabcff15b305b8fa3a9ee",
	                    "EndpointID": "e62a8fc038fbdbf1948471687ea762b6c9cced4e62a550d24a5580c2c684dc7d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-214580",
	                        "7217dfc1b74f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
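The full docker inspect dump above is verbose; when only liveness and addressing matter, docker inspect's -f Go-template flag can pull just those fields. A minimal sketch, not part of the harness:

	# container state plus the profile network's IPv4 address
	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "embed-certs-214580").IPAddress}}' embed-certs-214580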
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580
E1101 09:43:14.456991  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-214580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-214580 logs -n 25: (1.135757714s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-307390 sudo docker system info                                                                                                                                                                                              │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo crio config                                                                                                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p custom-flannel-307390                                                                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:42:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:42:50.614027  406120 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:42:50.614344  406120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:50.614355  406120 out.go:374] Setting ErrFile to fd 2...
	I1101 09:42:50.614360  406120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:50.614709  406120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:42:50.615372  406120 out.go:368] Setting JSON to false
	I1101 09:42:50.616698  406120 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5109,"bootTime":1761985062,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:42:50.616802  406120 start.go:143] virtualization: kvm guest
	I1101 09:42:50.619836  406120 out.go:179] * [old-k8s-version-106430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:42:50.621872  406120 notify.go:221] Checking for updates...
	I1101 09:42:50.621888  406120 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:42:50.628674  406120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:42:50.630225  406120 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:50.631472  406120 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:42:50.632961  406120 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:42:50.634435  406120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:42:50.638045  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:50.641595  406120 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:42:50.642850  406120 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:42:50.683326  406120 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:42:50.683460  406120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:50.783005  406120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:50.759487501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:50.783150  406120 docker.go:319] overlay module found
	I1101 09:42:50.787488  406120 out.go:179] * Using the docker driver based on existing profile
	I1101 09:42:50.788572  406120 start.go:309] selected driver: docker
	I1101 09:42:50.788595  406120 start.go:930] validating driver "docker" against &{Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:50.788779  406120 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:42:50.789528  406120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:50.912241  406120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:50.89413684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:50.912615  406120 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:42:50.912665  406120 cni.go:84] Creating CNI manager for ""
	I1101 09:42:50.912718  406120 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
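	The recommendation logged by cni.go:143 is a one-line decision rule: under the docker driver, any non-docker container runtime (crio here) gets kindnet. A minimal Go sketch of that rule, with an invented helper name (minikube's real selection in pkg/minikube/cni covers many more cases):

	    package cni

	    // chooseCNI condenses the rule in the log line above: docker driver +
	    // non-docker runtime => recommend kindnet. Illustrative only; the real
	    // logic also honors user-specified CNIs and other driver/runtime pairs.
	    func chooseCNI(driver, containerRuntime string) string {
	        if driver == "docker" && containerRuntime != "docker" {
	            return "kindnet"
	        }
	        return "" // fall back to minikube's default choice
	    }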
	I1101 09:42:50.912765  406120 start.go:353] cluster config:
	{Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:50.914622  406120 out.go:179] * Starting "old-k8s-version-106430" primary control-plane node in "old-k8s-version-106430" cluster
	I1101 09:42:50.915954  406120 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:42:50.917379  406120 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:42:50.918514  406120 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:42:50.918573  406120 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:42:50.918590  406120 cache.go:59] Caching tarball of preloaded images
	I1101 09:42:50.918663  406120 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:42:50.918693  406120 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:42:50.918906  406120 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:42:50.919196  406120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/config.json ...
	I1101 09:42:50.948106  406120 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:42:50.948138  406120 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:42:50.948154  406120 cache.go:233] Successfully downloaded all kic artifacts
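	The preload steps above (preload.go:183-233) reduce to a stat of a versioned tarball in the local cache; only a miss would trigger a download. A hedged Go sketch of that check, with the path layout copied from the log and the function names invented:

	    package main

	    import (
	        "fmt"
	        "os"
	        "path/filepath"
	    )

	    // preloadPath rebuilds the cache path seen in the log:
	    // <home>/cache/preloaded-tarball/preloaded-images-k8s-v18-<ver>-cri-o-overlay-amd64.tar.lz4
	    func preloadPath(minikubeHome, k8sVersion string) string {
	        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	    }

	    func main() {
	        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	        if _, err := os.Stat(p); err == nil {
	            fmt.Println("found local preload, skipping download:", p)
	        } else {
	            fmt.Println("no cached preload, would download:", p)
	        }
	    }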
	I1101 09:42:50.948189  406120 start.go:360] acquireMachinesLock for old-k8s-version-106430: {Name:mk47cab1e1fd681dae6862a843f54c2590f288ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:42:50.948282  406120 start.go:364] duration metric: took 39.062µs to acquireMachinesLock for "old-k8s-version-106430"
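	acquireMachinesLock runs with Delay:500ms and Timeout:10m0s, i.e. poll every half second and give up after ten minutes; here it succeeds in ~39µs because nothing holds the lock. A sketch of that retry-until-deadline pattern, assuming a simple exclusive lock file (not minikube's actual lock implementation):

	    package machinelock

	    import (
	        "fmt"
	        "os"
	        "time"
	    )

	    // acquireLock polls for an exclusive lock file until the deadline,
	    // mirroring the {Delay:500ms Timeout:10m0s} parameters in the log.
	    func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
	        deadline := time.Now().Add(timeout)
	        for {
	            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	            if err == nil {
	                return f, nil // caller closes and removes the file to release
	            }
	            if time.Now().After(deadline) {
	                return nil, fmt.Errorf("acquiring %s: %w", path, err)
	            }
	            time.Sleep(delay)
	        }
	    }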
	I1101 09:42:50.948308  406120 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:42:50.948318  406120 fix.go:54] fixHost starting: 
	I1101 09:42:50.948612  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:50.972291  406120 fix.go:112] recreateIfNeeded on old-k8s-version-106430: state=Stopped err=<nil>
	W1101 09:42:50.972324  406120 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:42:50.531203  396593 addons.go:239] Setting addon default-storageclass=true in "embed-certs-214580"
	I1101 09:42:50.531229  396593 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:50.531249  396593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:50.531260  396593 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:42:50.531310  396593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:42:50.533063  396593 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:42:50.568680  396593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:42:50.568816  396593 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:50.568869  396593 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:50.568974  396593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:42:50.601158  396593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:42:50.613732  396593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:42:50.676436  396593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:50.697529  396593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:50.737333  396593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:50.882009  396593 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 09:42:50.886149  396593 node_ready.go:35] waiting up to 6m0s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:42:51.161964  396593 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:42:51.163280  396593 addons.go:515] duration metric: took 670.456657ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:42:51.388894  396593 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-214580" context rescaled to 1 replicas
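	For context, the sed pipeline run at 09:42:50.613732 rewrites the coredns ConfigMap in flight: it injects a hosts block ahead of the "forward . /etc/resolv.conf" directive and a "log" directive ahead of "errors". Assuming the stock Corefile, the rewritten fragment would look roughly like:

	    .:53 {
	        log
	        errors
	        # ... stock directives (health, ready, kubernetes, ...) unchanged ...
	        hosts {
	           192.168.94.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        # ... cache, loop, reload, loadbalance ...
	    }

	The fallthrough keeps every name other than host.minikube.internal flowing on to the forward plugin.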
	I1101 09:42:49.376252  400655 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:42:49.383040  400655 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:42:49.383063  400655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:42:49.398858  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:42:49.653613  400655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:42:49.653808  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-927869 minikube.k8s.io/updated_at=2025_11_01T09_42_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=default-k8s-diff-port-927869 minikube.k8s.io/primary=true
	I1101 09:42:49.653892  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:49.665596  400655 ops.go:34] apiserver oom_adj: -16
	I1101 09:42:49.749820  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:50.250114  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:50.750585  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:51.250809  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:51.750006  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:52.250183  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:52.750028  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.250186  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.749999  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.822757  400655 kubeadm.go:1114] duration metric: took 4.168926091s to wait for elevateKubeSystemPrivileges
	I1101 09:42:53.822793  400655 kubeadm.go:403] duration metric: took 15.047661715s to StartCluster
	I1101 09:42:53.822817  400655 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:53.822903  400655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:53.824503  400655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:53.824773  400655 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:53.824788  400655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:42:53.824818  400655 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:42:53.824999  400655 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-927869"
	I1101 09:42:53.825027  400655 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-927869"
	I1101 09:42:53.825039  400655 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-927869"
	I1101 09:42:53.825063  400655 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:42:53.825088  400655 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-927869"
	I1101 09:42:53.825051  400655 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:42:53.825501  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.825647  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.828120  400655 out.go:179] * Verifying Kubernetes components...
	I1101 09:42:53.829949  400655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:53.849634  400655 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-927869"
	I1101 09:42:53.849672  400655 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:42:53.850090  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.850960  400655 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:42:53.852691  400655 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:53.852716  400655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:53.852783  400655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:42:53.882229  400655 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:53.882257  400655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:53.882320  400655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:42:53.885053  400655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:42:53.907017  400655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:42:53.935846  400655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:42:53.988391  400655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:54.013317  400655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:54.045518  400655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:54.149470  400655 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:42:54.151155  400655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:42:54.374035  400655 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:42:50.974210  406120 out.go:252] * Restarting existing docker container for "old-k8s-version-106430" ...
	I1101 09:42:50.974286  406120 cli_runner.go:164] Run: docker start old-k8s-version-106430
	I1101 09:42:51.290157  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:51.314807  406120 kic.go:430] container "old-k8s-version-106430" state is running.
	I1101 09:42:51.315254  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:51.341531  406120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/config.json ...
	I1101 09:42:51.341904  406120 machine.go:94] provisionDockerMachine start ...
	I1101 09:42:51.342010  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:51.365591  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:51.365960  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:51.365981  406120 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:42:51.366590  406120 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47468->127.0.0.1:33108: read: connection reset by peer
	I1101 09:42:54.518255  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-106430
	
	I1101 09:42:54.518288  406120 ubuntu.go:182] provisioning hostname "old-k8s-version-106430"
	I1101 09:42:54.518353  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:54.539831  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:54.540106  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:54.540129  406120 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-106430 && echo "old-k8s-version-106430" | sudo tee /etc/hostname
	I1101 09:42:54.702026  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-106430
	
	I1101 09:42:54.702114  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:54.724817  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:54.725136  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:54.725167  406120 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-106430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-106430/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-106430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:42:54.876787  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:42:54.876817  406120 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:42:54.876844  406120 ubuntu.go:190] setting up certificates
	I1101 09:42:54.876853  406120 provision.go:84] configureAuth start
	I1101 09:42:54.876906  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:54.896638  406120 provision.go:143] copyHostCerts
	I1101 09:42:54.896701  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:42:54.896718  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:42:54.896786  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:42:54.896893  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:42:54.896901  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:42:54.896956  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:42:54.897025  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:42:54.897034  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:42:54.897058  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:42:54.897110  406120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-106430 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-106430]
	I1101 09:42:54.980885  406120 provision.go:177] copyRemoteCerts
	I1101 09:42:54.980976  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:42:54.981016  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.002790  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.107988  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:42:55.129045  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:42:55.148507  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:42:55.168604  406120 provision.go:87] duration metric: took 291.735137ms to configureAuth
	I1101 09:42:55.168634  406120 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:42:55.168849  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:55.169027  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.187704  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:55.187966  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:55.187993  406120 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:42:55.497800  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:42:55.497831  406120 machine.go:97] duration metric: took 4.155886646s to provisionDockerMachine
	I1101 09:42:55.497846  406120 start.go:293] postStartSetup for "old-k8s-version-106430" (driver="docker")
	I1101 09:42:55.497860  406120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:42:55.497949  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:42:55.498013  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.519255  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.622520  406120 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:42:55.626564  406120 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:42:55.626626  406120 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:42:55.626647  406120 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:42:55.626715  406120 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:42:55.626812  406120 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:42:55.626948  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:42:55.635496  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:42:55.657654  406120 start.go:296] duration metric: took 159.790682ms for postStartSetup
	I1101 09:42:55.657758  406120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:42:55.657821  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.676825  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.778028  406120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:42:55.783383  406120 fix.go:56] duration metric: took 4.835054698s for fixHost
	I1101 09:42:55.783417  406120 start.go:83] releasing machines lock for "old-k8s-version-106430", held for 4.83512021s
	I1101 09:42:55.783495  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:55.804416  406120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:42:55.804456  406120 ssh_runner.go:195] Run: cat /version.json
	I1101 09:42:55.804492  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.804505  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.824353  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.824865  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.981055  406120 ssh_runner.go:195] Run: systemctl --version
	I1101 09:42:55.988383  406120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:42:56.025779  406120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:42:56.031204  406120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:42:56.031292  406120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:42:56.040425  406120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:42:56.040454  406120 start.go:496] detecting cgroup driver to use...
	I1101 09:42:56.040493  406120 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:42:56.040550  406120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:42:56.056165  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:42:56.071243  406120 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:42:56.071318  406120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:42:56.087584  406120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:42:56.102101  406120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:42:56.185386  406120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:42:56.270398  406120 docker.go:234] disabling docker service ...
	I1101 09:42:56.270483  406120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:42:56.287689  406120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:42:56.302743  406120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:42:56.390775  406120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:42:56.477451  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:42:56.490747  406120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:42:56.507214  406120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:42:56.507281  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.518382  406120 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:42:56.518457  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.527846  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.539349  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.548816  406120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:42:56.557406  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.567380  406120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.576904  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.586527  406120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:42:56.594509  406120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:42:56.602525  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:56.689010  406120 ssh_runner.go:195] Run: sudo systemctl restart crio
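	Reconstructed from the sed edits above (not captured from the node), the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should now read roughly:

	    pause_image = "registry.k8s.io/pause:3.9"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]

	The unprivileged-port sysctl lets pods bind ports below 1024 without extra capabilities; the systemctl restart crio above picks these settings up.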
	I1101 09:42:56.807304  406120 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:42:56.807374  406120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:42:56.811774  406120 start.go:564] Will wait 60s for crictl version
	I1101 09:42:56.811826  406120 ssh_runner.go:195] Run: which crictl
	I1101 09:42:56.815686  406120 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:42:56.841111  406120 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:42:56.841202  406120 ssh_runner.go:195] Run: crio --version
	I1101 09:42:56.870245  406120 ssh_runner.go:195] Run: crio --version
	I1101 09:42:56.903409  406120 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	W1101 09:42:52.889922  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	W1101 09:42:54.890128  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	W1101 09:42:57.390285  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	I1101 09:42:56.904675  406120 cli_runner.go:164] Run: docker network inspect old-k8s-version-106430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:42:56.922956  406120 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:42:56.927507  406120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:42:56.938255  406120 kubeadm.go:884] updating cluster {Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:42:56.938367  406120 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:42:56.938406  406120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:42:56.972069  406120 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:42:56.972094  406120 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:42:56.972148  406120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:42:57.002691  406120 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:42:57.002716  406120 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:42:57.002725  406120 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1101 09:42:57.002856  406120 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-106430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:42:57.002967  406120 ssh_runner.go:195] Run: crio config
	I1101 09:42:57.051562  406120 cni.go:84] Creating CNI manager for ""
	I1101 09:42:57.051580  406120 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:57.051594  406120 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:42:57.051624  406120 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-106430 NodeName:old-k8s-version-106430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:42:57.051795  406120 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-106430"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:42:57.051865  406120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:42:57.060477  406120 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:42:57.060538  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:42:57.069511  406120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1101 09:42:57.083613  406120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:42:57.097812  406120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1101 09:42:57.111580  406120 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:42:57.115488  406120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:42:57.126011  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:57.213189  406120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:57.238996  406120 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430 for IP: 192.168.103.2
	I1101 09:42:57.239022  406120 certs.go:195] generating shared ca certs ...
	I1101 09:42:57.239045  406120 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:57.239236  406120 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:42:57.239286  406120 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:42:57.239299  406120 certs.go:257] generating profile certs ...
	I1101 09:42:57.239410  406120 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/client.key
	I1101 09:42:57.239470  406120 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.key.08895b71
	I1101 09:42:57.239520  406120 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.key
	I1101 09:42:57.239670  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:42:57.239711  406120 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:42:57.239721  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:42:57.239755  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:42:57.239792  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:42:57.239816  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:42:57.239872  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:42:57.240646  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:42:57.261275  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:42:57.280725  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:42:57.302620  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:42:57.324849  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:42:57.346130  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:42:57.364807  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:42:57.382889  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:42:57.401595  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:42:57.420604  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:42:57.440611  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:42:57.460759  406120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:42:57.475075  406120 ssh_runner.go:195] Run: openssl version
	I1101 09:42:57.482420  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:42:57.491762  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.496929  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.497002  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.536750  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:42:57.545765  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:42:57.554820  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.559339  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.559405  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.598430  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:42:57.607527  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:42:57.616648  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.620647  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.620708  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.659548  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
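	The hash-then-symlink sequence above follows OpenSSL's subject-hash lookup convention: openssl x509 -hash prints the certificate's subject hash, and a <hash>.0 symlink under /etc/ssl/certs makes the cert discoverable to verifiers. A self-contained Go sketch of the same two steps (helper name invented):

	    package certs

	    import (
	        "os"
	        "os/exec"
	        "path/filepath"
	        "strings"
	    )

	    // linkBySubjectHash hashes certPath with openssl and symlinks
	    // /etc/ssl/certs/<hash>.0 to it, mirroring the ln -fs calls above.
	    func linkBySubjectHash(certPath string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	        if err != nil {
	            return err
	        }
	        hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	        link := filepath.Join("/etc/ssl/certs", hash+".0")
	        _ = os.Remove(link) // drop any stale link before relinking
	        return os.Symlink(certPath, link)
	    }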
	I1101 09:42:57.668681  406120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:42:57.672696  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:42:57.708768  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:42:57.747304  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:42:57.796024  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:42:57.836290  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:42:57.889574  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:42:57.936314  406120 kubeadm.go:401] StartCluster: {Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:57.936438  406120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:42:57.936498  406120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:42:57.970374  406120 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:42:57.970398  406120 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:42:57.970403  406120 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:42:57.970408  406120 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:42:57.970412  406120 cri.go:89] found id: ""
	I1101 09:42:57.970460  406120 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:42:57.984075  406120 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:42:57Z" level=error msg="open /run/runc: no such file or directory"
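The warning is benign: minikube probes runc directly for paused containers, but runc only creates its state directory (/run/runc by default) after it has managed at least one container, so on a CRI-O node the probe can fail even though the host is healthy. The same failure is reproducible by hand:

	# exits 1 with "open /run/runc: no such file or directory" when runc
	# has no state directory yet, exactly as in the log above
	sudo runc list -f json || echo "no runc-managed containers visible"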
	I1101 09:42:57.984151  406120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:42:57.993097  406120 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:42:57.993119  406120 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:42:57.993172  406120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:42:58.001723  406120 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:42:58.003096  406120 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-106430" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:58.004036  406120 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-106430" cluster setting kubeconfig missing "old-k8s-version-106430" context setting]
	I1101 09:42:58.005461  406120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.007975  406120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:42:58.016714  406120 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 09:42:58.016755  406120 kubeadm.go:602] duration metric: took 23.628873ms to restartPrimaryControlPlane
	I1101 09:42:58.016767  406120 kubeadm.go:403] duration metric: took 80.466912ms to StartCluster
	I1101 09:42:58.016787  406120 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.016859  406120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:58.019146  406120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.019406  406120 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:54.375638  400655 addons.go:515] duration metric: took 550.809564ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:42:54.654510  400655 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-927869" context rescaled to 1 replicas
	W1101 09:42:56.154636  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:42:58.019624  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:58.019516  406120 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:42:58.019681  406120 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019692  406120 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-106430"
	W1101 09:42:58.019698  406120 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:42:58.019725  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.019740  406120 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019723  406120 addons.go:70] Setting dashboard=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019762  406120 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-106430"
	I1101 09:42:58.019781  406120 addons.go:239] Setting addon dashboard=true in "old-k8s-version-106430"
	W1101 09:42:58.019793  406120 addons.go:248] addon dashboard should already be in state true
	I1101 09:42:58.019834  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.020108  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.020244  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.020320  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.022567  406120 out.go:179] * Verifying Kubernetes components...
	I1101 09:42:58.024092  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:58.047043  406120 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-106430"
	W1101 09:42:58.047078  406120 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:42:58.047127  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.047350  406120 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:42:58.048017  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.048837  406120 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:58.048858  406120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:58.048940  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.051652  406120 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:42:58.052846  406120 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:42:58.053934  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:42:58.053961  406120 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:42:58.054033  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.088042  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:58.088630  406120 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:58.088657  406120 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:58.088715  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.089200  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:58.115133  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:58.176448  406120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:58.191014  406120 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-106430" to be "Ready" ...
	I1101 09:42:58.206351  406120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:58.206583  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:42:58.206597  406120 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:42:58.222704  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:42:58.222727  406120 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:42:58.234887  406120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:58.238967  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:42:58.238997  406120 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:42:58.255653  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:42:58.255679  406120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:42:58.272580  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:42:58.272613  406120 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:42:58.290112  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:42:58.290144  406120 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:42:58.310526  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:42:58.310555  406120 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:42:58.330234  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:42:58.330264  406120 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:42:58.346642  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:42:58.346672  406120 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:42:58.360999  406120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
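The staged dashboard manifests are applied in one kubectl invocation; repeated -f flags are processed in order, which is why dashboard-ns.yaml comes first: the namespace has to exist before the namespaced objects land in it. The same ordering written as a loop over the log's file names:

	K=/var/lib/minikube/binaries/v1.28.0/kubectl
	for m in ns clusterrole clusterrolebinding configmap dp role rolebinding sa secret svc; do
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	        "$K" apply -f "/etc/kubernetes/addons/dashboard-$m.yaml"
	done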
	I1101 09:43:00.187111  406120 node_ready.go:49] node "old-k8s-version-106430" is "Ready"
	I1101 09:43:00.187161  406120 node_ready.go:38] duration metric: took 1.996099939s for node "old-k8s-version-106430" to be "Ready" ...
	I1101 09:43:00.187179  406120 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:00.187255  406120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:01.024412  406120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.818015558s)
	I1101 09:43:01.024466  406120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.789491554s)
	I1101 09:43:01.442889  406120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.081846516s)
	I1101 09:43:01.442959  406120 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.255677206s)
	I1101 09:43:01.442996  406120 api_server.go:72] duration metric: took 3.423552979s to wait for apiserver process to appear ...
	I1101 09:43:01.443069  406120 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:01.443095  406120 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:43:01.444371  406120 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-106430 addons enable metrics-server
	
	I1101 09:43:01.445665  406120 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1101 09:42:59.890173  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	I1101 09:43:01.394076  396593 node_ready.go:49] node "embed-certs-214580" is "Ready"
	I1101 09:43:01.394119  396593 node_ready.go:38] duration metric: took 10.507924999s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:43:01.394138  396593 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:01.394196  396593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:01.411195  396593 api_server.go:72] duration metric: took 10.918334459s to wait for apiserver process to appear ...
	I1101 09:43:01.411227  396593 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:01.411253  396593 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:43:01.417293  396593 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:43:01.418493  396593 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:01.418528  396593 api_server.go:131] duration metric: took 7.293707ms to wait for apiserver health ...
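The healthz wait is a plain HTTPS GET: the apiserver counts as healthy once /healthz returns 200 with body "ok". Run from the node, the probe reduces to one curl; the CA path below is minikube's conventional location and an assumption here:

	# anonymous access to /healthz is allowed by the default RBAC bindings
	curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.94.2:8443/healthz
	# -> ok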
	I1101 09:43:01.418538  396593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:01.422348  396593 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:01.422388  396593 system_pods.go:61] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.422397  396593 system_pods.go:61] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.422406  396593 system_pods.go:61] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.422411  396593 system_pods.go:61] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.422416  396593 system_pods.go:61] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.422419  396593 system_pods.go:61] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.422423  396593 system_pods.go:61] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.422429  396593 system_pods.go:61] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.422437  396593 system_pods.go:74] duration metric: took 3.892949ms to wait for pod list to return data ...
	I1101 09:43:01.422451  396593 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:01.425015  396593 default_sa.go:45] found service account: "default"
	I1101 09:43:01.425041  396593 default_sa.go:55] duration metric: took 2.583837ms for default service account to be created ...
	I1101 09:43:01.425054  396593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:01.428957  396593 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.428995  396593 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.429003  396593 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.429012  396593 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.429038  396593 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.429048  396593 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.429053  396593 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.429062  396593 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.429071  396593 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.429119  396593 retry.go:31] will retry after 227.328238ms: missing components: kube-dns
	I1101 09:43:01.662291  396593 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.662326  396593 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.662331  396593 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.662337  396593 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.662341  396593 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.662345  396593 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.663085  396593 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.663107  396593 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.663119  396593 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.663159  396593 retry.go:31] will retry after 276.226658ms: missing components: kube-dns
	I1101 09:43:01.944474  396593 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.944505  396593 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Running
	I1101 09:43:01.944510  396593 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.944516  396593 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.944520  396593 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.944523  396593 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.944526  396593 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.944540  396593 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.944543  396593 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Running
	I1101 09:43:01.944551  396593 system_pods.go:126] duration metric: took 519.491033ms to wait for k8s-apps to be running ...
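The three polling passes above are minikube's retry-with-backoff: list the kube-system pods, then retry while a required component (kube-dns here) is not yet Running. When reproducing the wait by hand, kubectl can express the terminal condition directly:

	# block until the coredns pods report Ready, the same end state the loop polls for
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m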
	I1101 09:43:01.944559  396593 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:01.944612  396593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:01.961365  396593 system_svc.go:56] duration metric: took 16.790691ms WaitForService to wait for kubelet
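The kubelet check relies on the exit code rather than output: with --quiet, systemctl is-active prints nothing and exits 0 only when the unit is active, so the whole wait is an exit-status test:

	sudo systemctl is-active --quiet kubelet && echo "kubelet active" || echo "kubelet not running"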
	I1101 09:43:01.961443  396593 kubeadm.go:587] duration metric: took 11.468588724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:01.961481  396593 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:01.965235  396593 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:01.965267  396593 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:01.965281  396593 node_conditions.go:105] duration metric: took 3.794232ms to run NodePressure ...
	I1101 09:43:01.965293  396593 start.go:242] waiting for startup goroutines ...
	I1101 09:43:01.965300  396593 start.go:247] waiting for cluster config update ...
	I1101 09:43:01.965311  396593 start.go:256] writing updated cluster config ...
	I1101 09:43:01.965628  396593 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:01.970967  396593 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:01.976406  396593 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.982287  396593 pod_ready.go:94] pod "coredns-66bc5c9577-cmnj8" is "Ready"
	I1101 09:43:01.982321  396593 pod_ready.go:86] duration metric: took 5.884528ms for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.985036  396593 pod_ready.go:83] waiting for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.990273  396593 pod_ready.go:94] pod "etcd-embed-certs-214580" is "Ready"
	I1101 09:43:01.990301  396593 pod_ready.go:86] duration metric: took 5.235782ms for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.992691  396593 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.997959  396593 pod_ready.go:94] pod "kube-apiserver-embed-certs-214580" is "Ready"
	I1101 09:43:01.997981  396593 pod_ready.go:86] duration metric: took 5.265586ms for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:02.000124  396593 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:02.377242  396593 pod_ready.go:94] pod "kube-controller-manager-embed-certs-214580" is "Ready"
	I1101 09:43:02.377272  396593 pod_ready.go:86] duration metric: took 377.124261ms for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:42:58.654079  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	W1101 09:43:00.658594  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:43:02.576590  396593 pod_ready.go:83] waiting for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:02.976290  396593 pod_ready.go:94] pod "kube-proxy-49j45" is "Ready"
	I1101 09:43:02.976316  396593 pod_ready.go:86] duration metric: took 399.691169ms for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:03.177497  396593 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:03.576578  396593 pod_ready.go:94] pod "kube-scheduler-embed-certs-214580" is "Ready"
	I1101 09:43:03.576609  396593 pod_ready.go:86] duration metric: took 399.080901ms for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:03.576624  396593 pod_ready.go:40] duration metric: took 1.605614697s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:03.627748  396593 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:43:03.630280  396593 out.go:179] * Done! kubectl is now configured to use "embed-certs-214580" cluster and "default" namespace by default
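At this point the kubeconfig's current context points at the new profile, so a bare kubectl already targets this cluster. A quick post-start sanity check, profile name from the log:

	kubectl config current-context    # -> embed-certs-214580
	kubectl get pods -n kube-system   # everything should be Running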
	I1101 09:43:01.446674  406120 addons.go:515] duration metric: took 3.427169236s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 09:43:01.447744  406120 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:43:01.448983  406120 api_server.go:141] control plane version: v1.28.0
	I1101 09:43:01.449007  406120 api_server.go:131] duration metric: took 5.9302ms to wait for apiserver health ...
	I1101 09:43:01.449015  406120 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:01.453386  406120 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:01.453440  406120 system_pods.go:61] "coredns-5dd5756b68-xh2rf" [2dc48063-a93a-46c9-b6da-451a12b954c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.453452  406120 system_pods.go:61] "etcd-old-k8s-version-106430" [6f7386a3-1337-464f-a414-cd3c59f37e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:01.453460  406120 system_pods.go:61] "kindnet-5v6hn" [68338c9c-3108-4c9f-8fed-214858c90ef5] Running
	I1101 09:43:01.453468  406120 system_pods.go:61] "kube-apiserver-old-k8s-version-106430" [c2554645-936b-4a63-8090-580b3bef9961] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:01.453475  406120 system_pods.go:61] "kube-controller-manager-old-k8s-version-106430" [7bd6d2ff-1cd3-48cd-89f6-2b3c68fda714] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:01.453486  406120 system_pods.go:61] "kube-proxy-zqs8f" [834c3d0a-03fc-480c-a4c6-9f010159b1f9] Running
	I1101 09:43:01.453494  406120 system_pods.go:61] "kube-scheduler-old-k8s-version-106430" [8dd03d37-0e38-42c2-8c96-795cf8cf7d73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:01.453503  406120 system_pods.go:61] "storage-provisioner" [b8fde0f9-bc13-41ca-9adc-2b0edc592938] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.453512  406120 system_pods.go:74] duration metric: took 4.489662ms to wait for pod list to return data ...
	I1101 09:43:01.453524  406120 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:01.455675  406120 default_sa.go:45] found service account: "default"
	I1101 09:43:01.455701  406120 default_sa.go:55] duration metric: took 2.169679ms for default service account to be created ...
	I1101 09:43:01.455712  406120 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:01.460873  406120 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.460972  406120 system_pods.go:89] "coredns-5dd5756b68-xh2rf" [2dc48063-a93a-46c9-b6da-451a12b954c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.460992  406120 system_pods.go:89] "etcd-old-k8s-version-106430" [6f7386a3-1337-464f-a414-cd3c59f37e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:01.461000  406120 system_pods.go:89] "kindnet-5v6hn" [68338c9c-3108-4c9f-8fed-214858c90ef5] Running
	I1101 09:43:01.461020  406120 system_pods.go:89] "kube-apiserver-old-k8s-version-106430" [c2554645-936b-4a63-8090-580b3bef9961] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:01.461032  406120 system_pods.go:89] "kube-controller-manager-old-k8s-version-106430" [7bd6d2ff-1cd3-48cd-89f6-2b3c68fda714] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:01.461041  406120 system_pods.go:89] "kube-proxy-zqs8f" [834c3d0a-03fc-480c-a4c6-9f010159b1f9] Running
	I1101 09:43:01.461050  406120 system_pods.go:89] "kube-scheduler-old-k8s-version-106430" [8dd03d37-0e38-42c2-8c96-795cf8cf7d73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:01.461057  406120 system_pods.go:89] "storage-provisioner" [b8fde0f9-bc13-41ca-9adc-2b0edc592938] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.461073  406120 system_pods.go:126] duration metric: took 5.35372ms to wait for k8s-apps to be running ...
	I1101 09:43:01.461084  406120 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:01.461171  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:01.478042  406120 system_svc.go:56] duration metric: took 16.947484ms WaitForService to wait for kubelet
	I1101 09:43:01.478073  406120 kubeadm.go:587] duration metric: took 3.458630412s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:01.478102  406120 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:01.481243  406120 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:01.481271  406120 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:01.481287  406120 node_conditions.go:105] duration metric: took 3.179335ms to run NodePressure ...
	I1101 09:43:01.481305  406120 start.go:242] waiting for startup goroutines ...
	I1101 09:43:01.481315  406120 start.go:247] waiting for cluster config update ...
	I1101 09:43:01.481335  406120 start.go:256] writing updated cluster config ...
	I1101 09:43:01.481614  406120 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:01.486105  406120 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:01.491368  406120 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xh2rf" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:43:03.497602  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:03.154362  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:43:05.154723  400655 node_ready.go:49] node "default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:05.154755  400655 node_ready.go:38] duration metric: took 11.003549181s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:43:05.154769  400655 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:05.154817  400655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:05.167386  400655 api_server.go:72] duration metric: took 11.342569622s to wait for apiserver process to appear ...
	I1101 09:43:05.167411  400655 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:05.167431  400655 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 09:43:05.171809  400655 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 09:43:05.172865  400655 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:05.172890  400655 api_server.go:131] duration metric: took 5.472974ms to wait for apiserver health ...
	I1101 09:43:05.172899  400655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:05.176330  400655 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:05.176405  400655 system_pods.go:61] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.176414  400655 system_pods.go:61] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.176422  400655 system_pods.go:61] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.176427  400655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.176433  400655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.176439  400655 system_pods.go:61] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.176447  400655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.176455  400655 system_pods.go:61] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.176463  400655 system_pods.go:74] duration metric: took 3.558504ms to wait for pod list to return data ...
	I1101 09:43:05.176475  400655 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:05.181273  400655 default_sa.go:45] found service account: "default"
	I1101 09:43:05.181302  400655 default_sa.go:55] duration metric: took 4.820177ms for default service account to be created ...
	I1101 09:43:05.181312  400655 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:05.184340  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:05.184370  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.184375  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.184381  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.184384  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.184388  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.184392  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.184395  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.184400  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.184422  400655 retry.go:31] will retry after 237.080714ms: missing components: kube-dns
	I1101 09:43:05.426136  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:05.426172  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.426178  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.426184  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.426188  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.426191  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.426195  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.426198  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.426204  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.426223  400655 retry.go:31] will retry after 237.118658ms: missing components: kube-dns
	I1101 09:43:05.666570  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:05.666625  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.666633  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.666642  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.666648  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.666653  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.666658  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.666663  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.666672  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.666691  400655 retry.go:31] will retry after 395.981375ms: missing components: kube-dns
	I1101 09:43:06.068028  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:06.068086  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Running
	I1101 09:43:06.068096  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:06.068102  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:06.068110  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:06.068116  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:06.068121  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:06.068127  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:06.068139  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Running
	I1101 09:43:06.068149  400655 system_pods.go:126] duration metric: took 886.830082ms to wait for k8s-apps to be running ...
	I1101 09:43:06.068163  400655 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:06.068224  400655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:06.083264  400655 system_svc.go:56] duration metric: took 15.091461ms WaitForService to wait for kubelet
	I1101 09:43:06.083307  400655 kubeadm.go:587] duration metric: took 12.25849379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:06.083332  400655 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:06.086982  400655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:06.087019  400655 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:06.087036  400655 node_conditions.go:105] duration metric: took 3.698423ms to run NodePressure ...
	I1101 09:43:06.087054  400655 start.go:242] waiting for startup goroutines ...
	I1101 09:43:06.087064  400655 start.go:247] waiting for cluster config update ...
	I1101 09:43:06.087077  400655 start.go:256] writing updated cluster config ...
	I1101 09:43:06.087420  400655 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:06.092662  400655 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:06.097988  400655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.103499  400655 pod_ready.go:94] pod "coredns-66bc5c9577-mlk9t" is "Ready"
	I1101 09:43:06.103537  400655 pod_ready.go:86] duration metric: took 5.514026ms for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.106066  400655 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.110682  400655 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:06.110711  400655 pod_ready.go:86] duration metric: took 4.616826ms for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.113186  400655 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.117832  400655 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:06.117855  400655 pod_ready.go:86] duration metric: took 4.643427ms for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.120106  400655 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.498078  400655 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:06.498104  400655 pod_ready.go:86] duration metric: took 377.974782ms for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.699099  400655 pod_ready.go:83] waiting for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.098234  400655 pod_ready.go:94] pod "kube-proxy-dszvg" is "Ready"
	I1101 09:43:07.098264  400655 pod_ready.go:86] duration metric: took 399.13582ms for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.298565  400655 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.697890  400655 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:07.697963  400655 pod_ready.go:86] duration metric: took 399.372786ms for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.697978  400655 pod_ready.go:40] duration metric: took 1.605281665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:07.748049  400655 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:43:07.750223  400655 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927869" cluster and "default" namespace by default
	W1101 09:43:05.996507  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:07.997891  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:10.497973  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:43:01 embed-certs-214580 crio[774]: time="2025-11-01T09:43:01.71110151Z" level=info msg="Starting container: 465b615b60e9eae65fd3e267fd6e70577e53be2f2da9ec554715d82d0b5377b0" id=5a0269cd-82e3-40a8-9b6e-97687624e35c name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:01 embed-certs-214580 crio[774]: time="2025-11-01T09:43:01.713654204Z" level=info msg="Started container" PID=1839 containerID=465b615b60e9eae65fd3e267fd6e70577e53be2f2da9ec554715d82d0b5377b0 description=kube-system/coredns-66bc5c9577-cmnj8/coredns id=5a0269cd-82e3-40a8-9b6e-97687624e35c name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c6c4ec98d32dc26c77ef4909613d89ff592640ed75d22e1bd06568519162f09
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.12189673Z" level=info msg="Running pod sandbox: default/busybox/POD" id=bbaa4055-08a7-4c3b-8cfd-a0f092e243b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.122040484Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.126905719Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3052a638f8fd4bddecd42ffbba264603bc6cf284d92c2be25cea66c48c992478 UID:b5303634-8aad-428d-8ab1-7ac3875ed855 NetNS:/var/run/netns/60d3b263-5354-4b00-8ae0-0192c600be35 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00091c5a0}] Aliases:map[]}"
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.126962388Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.136989313Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3052a638f8fd4bddecd42ffbba264603bc6cf284d92c2be25cea66c48c992478 UID:b5303634-8aad-428d-8ab1-7ac3875ed855 NetNS:/var/run/netns/60d3b263-5354-4b00-8ae0-0192c600be35 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00091c5a0}] Aliases:map[]}"
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.137124358Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.137866076Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.138671765Z" level=info msg="Ran pod sandbox 3052a638f8fd4bddecd42ffbba264603bc6cf284d92c2be25cea66c48c992478 with infra container: default/busybox/POD" id=bbaa4055-08a7-4c3b-8cfd-a0f092e243b8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.140216371Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=46a76337-887e-4218-9f07-af6c19833805 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.140352785Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=46a76337-887e-4218-9f07-af6c19833805 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.140387898Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=46a76337-887e-4218-9f07-af6c19833805 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.141245097Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c560902d-828a-4b2d-8495-0171ee29e091 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:04 embed-certs-214580 crio[774]: time="2025-11-01T09:43:04.143847611Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.173292846Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c560902d-828a-4b2d-8495-0171ee29e091 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.174163164Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cc82e281-555f-433b-837e-bbcbbfca9eaa name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.176055913Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b8ae8016-01b0-42b1-8b1a-de9181ae6043 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.182140368Z" level=info msg="Creating container: default/busybox/busybox" id=8434014b-b7a7-48b2-b2b9-6b5575fcf102 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.182286399Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.186279866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.18683998Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.216590294Z" level=info msg="Created container bf7d87659a365ae262cceb60c1ef4feb2fc516cd4360ad57d22a1b36fedf38d7: default/busybox/busybox" id=8434014b-b7a7-48b2-b2b9-6b5575fcf102 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.217420685Z" level=info msg="Starting container: bf7d87659a365ae262cceb60c1ef4feb2fc516cd4360ad57d22a1b36fedf38d7" id=fe73e16f-62d4-4a1f-9df8-f8f45754c497 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:06 embed-certs-214580 crio[774]: time="2025-11-01T09:43:06.219279777Z" level=info msg="Started container" PID=1908 containerID=bf7d87659a365ae262cceb60c1ef4feb2fc516cd4360ad57d22a1b36fedf38d7 description=default/busybox/busybox id=fe73e16f-62d4-4a1f-9df8-f8f45754c497 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3052a638f8fd4bddecd42ffbba264603bc6cf284d92c2be25cea66c48c992478
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	bf7d87659a365       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   3052a638f8fd4       busybox                                      default
	465b615b60e9e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   5c6c4ec98d32d       coredns-66bc5c9577-cmnj8                     kube-system
	420cc57affe07       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   b7d35e2cdf75e       storage-provisioner                          kube-system
	391e8b2ecd623       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      24 seconds ago      Running             kindnet-cni               0                   714fc4d4b1c0a       kindnet-v28lz                                kube-system
	b586b6eb5030f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   4b33b1b56a79f       kube-proxy-49j45                             kube-system
	f1f1f70298aa3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   11cef5c67d94c       kube-scheduler-embed-certs-214580            kube-system
	959a0a56cd839       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   737cc7223051e       kube-controller-manager-embed-certs-214580   kube-system
	e261a220a8ccb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   47d254c46e388       etcd-embed-certs-214580                      kube-system
	6352167d0b313       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   8774ff78e603a       kube-apiserver-embed-certs-214580            kube-system
	
	
	==> coredns [465b615b60e9eae65fd3e267fd6e70577e53be2f2da9ec554715d82d0b5377b0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38294 - 5247 "HINFO IN 9069797552166602556.5973232530235493677. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012014891s
	
	
	==> describe nodes <==
	Name:               embed-certs-214580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-214580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=embed-certs-214580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-214580
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:43:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:43:01 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:43:01 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:43:01 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:43:01 +0000   Sat, 01 Nov 2025 09:43:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-214580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d2ac0cbf-eedb-40ea-a447-534bb7a6586c
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-cmnj8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-214580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-v28lz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-214580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-embed-certs-214580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-49j45                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-214580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node embed-certs-214580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node embed-certs-214580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node embed-certs-214580 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node embed-certs-214580 event: Registered Node embed-certs-214580 in Controller
	  Normal  NodeReady                14s   kubelet          Node embed-certs-214580 status is now: NodeReady
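A note on the "Allocated resources" table above: the percentages are pod requests and limits divided by the node's allocatable capacity. The 850m of CPU requested is the sum of the per-pod requests listed (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and 850m / 8000m allocatable ≈ 10.6%, which kubectl truncates to the 10% shown; likewise the 220Mi of memory requested (70Mi + 100Mi + 50Mi) against 32863356Ki allocatable is ≈ 0.7%, displayed as 0%.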
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
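The repeated "IPv4: martian source" entries in the dmesg output above are the kernel logging packets whose source address (pod-network addresses in 10.244.0.0/16 here) arrived on an interface that reverse-path filtering does not associate with that source; on hosts running several bridged container networks this is common and usually harmless. Whether these messages appear at all is governed by standard kernel sysctls, sketched below for illustration (values are the usual defaults, not read from this host):

	# sysctls that govern the "martian source" messages above
	net.ipv4.conf.all.log_martians = 1   # log packets with impossible source addresses
	net.ipv4.conf.all.rp_filter = 1      # strict reverse-path filtering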
	
	
	==> etcd [e261a220a8ccb636781206260c135511381e1c5a0860e5be5bda8a2b8ed673d8] <==
	{"level":"warn","ts":"2025-11-01T09:42:41.309725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.317376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.327358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.334846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.342210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.349824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.357039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.363862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.370154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.376810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.394136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.400949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.407734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.415596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.424422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.432562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.441054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.449032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.457017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.464065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.471604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.479650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.491748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.499006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:41.506123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33496","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:43:15 up  1:25,  0 user,  load average: 4.62, 4.44, 2.85
	Linux embed-certs-214580 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [391e8b2ecd623d2a2278ff9e6b8fb3bc0174f61ed4e52ce0eb4d9fc8ae23d4f0] <==
	I1101 09:42:50.708349       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:42:50.727486       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:42:50.727783       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:42:50.727806       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:42:50.727837       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:42:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1101 09:42:51.007535       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1101 09:42:51.007560       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1101 09:42:51.107218       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:42:51.126648       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:42:51.126709       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:42:51.126965       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1101 09:42:51.202494       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1101 09:42:52.227904       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:42:52.227970       1 metrics.go:72] Registering metrics
	I1101 09:42:52.228105       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:01.007161       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:43:01.007265       1 main.go:301] handling current node
	I1101 09:43:11.008020       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:43:11.008069       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6352167d0b3138414fd496f94810ff448148dc7cf6bf45cf7778ae3a13bf57e5] <==
	E1101 09:42:42.119440       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1101 09:42:42.120648       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:42:42.120654       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:42.125007       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:42.125223       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:42:42.167102       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:42:42.247487       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:42:42.970391       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:42:42.974433       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:42:42.974450       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:42:43.540480       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:42:43.582126       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:42:43.676727       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:42:43.684025       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1101 09:42:43.685350       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:42:43.690908       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:42:44.017441       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:42:44.866379       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:42:44.889100       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:42:44.901867       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:42:49.822216       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:49.828158       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:50.066697       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:42:50.115648       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:43:13.908208       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:33128: use of closed network connection
	
	
	==> kube-controller-manager [959a0a56cd8399c34986c25fa23158d54b2159abc9f573326db98002d6fe83f9] <==
	I1101 09:42:49.012556       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:42:49.012960       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:42:49.012970       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:42:49.013098       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:42:49.013441       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:42:49.013900       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:42:49.014026       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:42:49.014032       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:42:49.014048       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:42:49.014113       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:42:49.014155       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:42:49.014223       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:42:49.014373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:42:49.014390       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:42:49.015002       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:42:49.015032       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:42:49.020282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:42:49.021876       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:42:49.021877       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:42:49.022123       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:42:49.022731       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-214580"
	I1101 09:42:49.022800       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:42:49.031555       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:42:49.035771       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:04.025496       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b586b6eb5030fd90e382770207b18f463ef70637e76a82b08f26bff97f896143] <==
	I1101 09:42:50.595500       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:42:50.678845       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:42:50.779660       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:42:50.779707       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:42:50.779839       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:42:50.865848       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:42:50.865924       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:42:50.888781       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:42:50.889318       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:42:50.889381       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:42:50.891493       1 config.go:309] "Starting node config controller"
	I1101 09:42:50.892020       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:42:50.892115       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:42:50.891640       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:42:50.892201       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:42:50.891594       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:42:50.892361       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:42:50.891622       1 config.go:200] "Starting service config controller"
	I1101 09:42:50.892458       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:42:50.993597       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:42:50.993604       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:42:50.993722       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
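The kube-proxy warning above ("nodePortAddresses is unset; NodePort connections will be accepted on all local IPs") names its own remedy. In configuration terms that is the nodePortAddresses field of KubeProxyConfiguration; a minimal sketch, using the "primary" value the warning itself suggests:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	# accept NodePort connections only on each node's primary IPs,
	# per the remedy named in the warning above
	nodePortAddresses: ["primary"]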
	
	
	==> kube-scheduler [f1f1f70298aa3be762d8ec73f5f2f45512a4cbc16445955008572a80514250bc] <==
	E1101 09:42:42.027558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:42:42.027650       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:42:42.027724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:42:42.027890       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:42:42.032387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:42:42.032460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:42:42.032498       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:42:42.032568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:42:42.032634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:42:42.032726       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:42:42.032854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:42:42.032987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:42:42.035516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:42:42.884548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:42:42.896173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:42:42.934964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:42:43.024041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:42:43.041150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:42:43.109739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:42:43.132440       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:42:43.177969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:42:43.265583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:42:43.269183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:42:43.361807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1101 09:42:45.420375       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
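The burst of "Failed to watch ... is forbidden" errors above is the usual startup race on a fresh control plane: the scheduler's informers start listing resources before the system:kube-scheduler RBAC bindings have propagated through the just-started API server, so the early lists are rejected and retried. The closing "Caches are synced" line at 09:42:45 shows the reflectors recovering once authorization catches up. For reference, the default binding involved can be inspected with:

	kubectl get clusterrolebinding system:kube-scheduler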
	
	
	==> kubelet <==
	Nov 01 09:42:45 embed-certs-214580 kubelet[1303]: E1101 09:42:45.761902    1303 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-embed-certs-214580\" already exists" pod="kube-system/kube-scheduler-embed-certs-214580"
	Nov 01 09:42:45 embed-certs-214580 kubelet[1303]: I1101 09:42:45.761954    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-214580" podStartSLOduration=1.7619332189999999 podStartE2EDuration="1.761933219s" podCreationTimestamp="2025-11-01 09:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:45.750114981 +0000 UTC m=+1.117129990" watchObservedRunningTime="2025-11-01 09:42:45.761933219 +0000 UTC m=+1.128948219"
	Nov 01 09:42:45 embed-certs-214580 kubelet[1303]: I1101 09:42:45.762068    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-214580" podStartSLOduration=1.7620628950000001 podStartE2EDuration="1.762062895s" podCreationTimestamp="2025-11-01 09:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:45.761816542 +0000 UTC m=+1.128831540" watchObservedRunningTime="2025-11-01 09:42:45.762062895 +0000 UTC m=+1.129077912"
	Nov 01 09:42:45 embed-certs-214580 kubelet[1303]: I1101 09:42:45.785735    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-214580" podStartSLOduration=1.785704503 podStartE2EDuration="1.785704503s" podCreationTimestamp="2025-11-01 09:42:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:45.772292494 +0000 UTC m=+1.139307516" watchObservedRunningTime="2025-11-01 09:42:45.785704503 +0000 UTC m=+1.152719506"
	Nov 01 09:42:49 embed-certs-214580 kubelet[1303]: I1101 09:42:49.082092    1303 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:42:49 embed-certs-214580 kubelet[1303]: I1101 09:42:49.082876    1303 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244638    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d68725c8-8c77-4a60-801e-59385a165589-lib-modules\") pod \"kindnet-v28lz\" (UID: \"d68725c8-8c77-4a60-801e-59385a165589\") " pod="kube-system/kindnet-v28lz"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244730    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhfln\" (UniqueName: \"kubernetes.io/projected/d68725c8-8c77-4a60-801e-59385a165589-kube-api-access-vhfln\") pod \"kindnet-v28lz\" (UID: \"d68725c8-8c77-4a60-801e-59385a165589\") " pod="kube-system/kindnet-v28lz"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244789    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/234d7bd6-5336-4ec0-8d37-9e59105a6166-kube-proxy\") pod \"kube-proxy-49j45\" (UID: \"234d7bd6-5336-4ec0-8d37-9e59105a6166\") " pod="kube-system/kube-proxy-49j45"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244814    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/234d7bd6-5336-4ec0-8d37-9e59105a6166-xtables-lock\") pod \"kube-proxy-49j45\" (UID: \"234d7bd6-5336-4ec0-8d37-9e59105a6166\") " pod="kube-system/kube-proxy-49j45"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244833    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/234d7bd6-5336-4ec0-8d37-9e59105a6166-lib-modules\") pod \"kube-proxy-49j45\" (UID: \"234d7bd6-5336-4ec0-8d37-9e59105a6166\") " pod="kube-system/kube-proxy-49j45"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244892    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2jwb\" (UniqueName: \"kubernetes.io/projected/234d7bd6-5336-4ec0-8d37-9e59105a6166-kube-api-access-t2jwb\") pod \"kube-proxy-49j45\" (UID: \"234d7bd6-5336-4ec0-8d37-9e59105a6166\") " pod="kube-system/kube-proxy-49j45"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244963    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d68725c8-8c77-4a60-801e-59385a165589-xtables-lock\") pod \"kindnet-v28lz\" (UID: \"d68725c8-8c77-4a60-801e-59385a165589\") " pod="kube-system/kindnet-v28lz"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.244999    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d68725c8-8c77-4a60-801e-59385a165589-cni-cfg\") pod \"kindnet-v28lz\" (UID: \"d68725c8-8c77-4a60-801e-59385a165589\") " pod="kube-system/kindnet-v28lz"
	Nov 01 09:42:50 embed-certs-214580 kubelet[1303]: I1101 09:42:50.817895    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-v28lz" podStartSLOduration=0.817867212 podStartE2EDuration="817.867212ms" podCreationTimestamp="2025-11-01 09:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:50.790824329 +0000 UTC m=+6.157839330" watchObservedRunningTime="2025-11-01 09:42:50.817867212 +0000 UTC m=+6.184882214"
	Nov 01 09:42:52 embed-certs-214580 kubelet[1303]: I1101 09:42:52.959894    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-49j45" podStartSLOduration=2.959870227 podStartE2EDuration="2.959870227s" podCreationTimestamp="2025-11-01 09:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:50.819053445 +0000 UTC m=+6.186068464" watchObservedRunningTime="2025-11-01 09:42:52.959870227 +0000 UTC m=+8.326885225"
	Nov 01 09:43:01 embed-certs-214580 kubelet[1303]: I1101 09:43:01.287844    1303 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:43:01 embed-certs-214580 kubelet[1303]: I1101 09:43:01.423891    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zfb\" (UniqueName: \"kubernetes.io/projected/add6352a-7e5a-405a-96bb-cd63b7f4eb6a-kube-api-access-c4zfb\") pod \"storage-provisioner\" (UID: \"add6352a-7e5a-405a-96bb-cd63b7f4eb6a\") " pod="kube-system/storage-provisioner"
	Nov 01 09:43:01 embed-certs-214580 kubelet[1303]: I1101 09:43:01.424014    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7de64ad2-dad1-4aa9-aff7-af9733684465-config-volume\") pod \"coredns-66bc5c9577-cmnj8\" (UID: \"7de64ad2-dad1-4aa9-aff7-af9733684465\") " pod="kube-system/coredns-66bc5c9577-cmnj8"
	Nov 01 09:43:01 embed-certs-214580 kubelet[1303]: I1101 09:43:01.424044    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6wnc\" (UniqueName: \"kubernetes.io/projected/7de64ad2-dad1-4aa9-aff7-af9733684465-kube-api-access-h6wnc\") pod \"coredns-66bc5c9577-cmnj8\" (UID: \"7de64ad2-dad1-4aa9-aff7-af9733684465\") " pod="kube-system/coredns-66bc5c9577-cmnj8"
	Nov 01 09:43:01 embed-certs-214580 kubelet[1303]: I1101 09:43:01.424077    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/add6352a-7e5a-405a-96bb-cd63b7f4eb6a-tmp\") pod \"storage-provisioner\" (UID: \"add6352a-7e5a-405a-96bb-cd63b7f4eb6a\") " pod="kube-system/storage-provisioner"
	Nov 01 09:43:01 embed-certs-214580 kubelet[1303]: I1101 09:43:01.828023    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=10.828000449 podStartE2EDuration="10.828000449s" podCreationTimestamp="2025-11-01 09:42:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:43:01.81484747 +0000 UTC m=+17.181862474" watchObservedRunningTime="2025-11-01 09:43:01.828000449 +0000 UTC m=+17.195015450"
	Nov 01 09:43:03 embed-certs-214580 kubelet[1303]: I1101 09:43:03.814474    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cmnj8" podStartSLOduration=13.81444509 podStartE2EDuration="13.81444509s" podCreationTimestamp="2025-11-01 09:42:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:43:01.828236223 +0000 UTC m=+17.195251214" watchObservedRunningTime="2025-11-01 09:43:03.81444509 +0000 UTC m=+19.181460091"
	Nov 01 09:43:03 embed-certs-214580 kubelet[1303]: I1101 09:43:03.841004    1303 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58wb2\" (UniqueName: \"kubernetes.io/projected/b5303634-8aad-428d-8ab1-7ac3875ed855-kube-api-access-58wb2\") pod \"busybox\" (UID: \"b5303634-8aad-428d-8ab1-7ac3875ed855\") " pod="default/busybox"
	Nov 01 09:43:06 embed-certs-214580 kubelet[1303]: I1101 09:43:06.829385    1303 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.7950006630000002 podStartE2EDuration="3.829361673s" podCreationTimestamp="2025-11-01 09:43:03 +0000 UTC" firstStartedPulling="2025-11-01 09:43:04.140686705 +0000 UTC m=+19.507701687" lastFinishedPulling="2025-11-01 09:43:06.175047702 +0000 UTC m=+21.542062697" observedRunningTime="2025-11-01 09:43:06.829180016 +0000 UTC m=+22.196195040" watchObservedRunningTime="2025-11-01 09:43:06.829361673 +0000 UTC m=+22.196376676"
	
	
	==> storage-provisioner [420cc57affe07b3bc15b2995e0a0360c4657400b6f24c533ff7d0b0916da51e6] <==
	I1101 09:43:01.692832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:43:01.701279       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:43:01.701336       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:43:01.704601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:01.712164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:01.712356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:43:01.712520       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-214580_205af1e4-e7c3-48d7-a977-d1ccd5632680!
	I1101 09:43:01.712508       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af0079c7-90aa-4baa-b4dc-fd21bc09ce5f", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-214580_205af1e4-e7c3-48d7-a977-d1ccd5632680 became leader
	W1101 09:43:01.714998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:01.719754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:01.813565       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-214580_205af1e4-e7c3-48d7-a977-d1ccd5632680!
	W1101 09:43:03.723210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:03.728351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:05.731430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:05.737030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:07.741364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:07.746322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:09.749307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:09.755156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:11.758419       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:11.763500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:13.766977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:13.772998       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
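Every "v1 Endpoints is deprecated" warning above comes from the provisioner's leader election, which still uses an Endpoints object (kube-system/k8s.io-minikube-hostpath, visible in the LeaderElection event) as its lock. client-go's leaderelection package also supports Lease-based locks, which avoid the warning; a minimal sketch under that assumption (clientset construction omitted; the lease name reuses the lock name from the log):

	// newLeaseLock returns a Lease-backed leader-election lock in place
	// of the deprecated Endpoints lock producing the warnings above.
	// Imports: k8s.io/client-go/kubernetes,
	// k8s.io/client-go/tools/leaderelection/resourcelock,
	// metav1 "k8s.io/apimachinery/pkg/apis/meta/v1".
	func newLeaseLock(clientset kubernetes.Interface, id string) resourcelock.Interface {
		return &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath",
				Namespace: "kube-system",
			},
			Client:     clientset.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	}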
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214580 -n embed-certs-214580
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-214580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.38s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (332.393427ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:16Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
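The MK_ADDON_ENABLE_PAUSED failure above is not about the addon itself: before enabling an addon, minikube checks whether the cluster is paused by listing runc containers, and that check fails here because /run/runc (runc's state directory) does not exist on the node, which suggests nothing is tracked there rather than that anything is actually paused. A sketch of the failing probe, reduced to its essentials (helper name hypothetical; the command is the one quoted in the stderr):

	// listRuncContainers reproduces the paused-state probe quoted in the
	// error above: it runs `sudo runc list -f json` and surfaces failures
	// such as "open /run/runc: no such file or directory".
	// Imports: fmt, os/exec.
	func listRuncContainers() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w: %s", err, out)
		}
		return out, nil
	}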
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-927869 describe deploy/metrics-server -n kube-system: exit status 1 (81.725662ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-927869 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
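The assertion at start_stop_delete_test.go:219 is a plain substring check: the test describes the deployment and expects the image, with the --registries override applied, to appear as fake.domain/registry.k8s.io/echoserver:1.4. Because the enable call itself failed, the deployment was never created and the check runs against empty output. A sketch of the check's shape under those assumptions (not the test's literal code):

	// requireImage mirrors the failed assertion: describe the deployment
	// via kubectl and require the overridden image string in the output.
	// Imports: fmt, os/exec, strings.
	func requireImage(kubeContext string) error {
		out, _ := exec.Command("kubectl", "--context", kubeContext,
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		if !strings.Contains(string(out), want) {
			return fmt.Errorf("deployment info %q does not contain %q", string(out), want)
		}
		return nil
	}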
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-927869
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-927869:

-- stdout --
	[
	    {
	        "Id": "08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c",
	        "Created": "2025-11-01T09:42:32.791979804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 402314,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:42:32.839701652Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/hosts",
	        "LogPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c-json.log",
	        "Name": "/default-k8s-diff-port-927869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-927869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-927869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c",
	                "LowerDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-927869",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-927869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-927869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-927869",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-927869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6eaae58a2a82c3663a667797ecd9a3553c02cf32b685214023b1c8e141441768",
	            "SandboxKey": "/var/run/docker/netns/6eaae58a2a82",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-927869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:4b:a9:33:57:05",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5df57938ba0e2329abf459496ea29ebdbd8c04ec3a35e78ed455192e01829fff",
	                    "EndpointID": "b06649de30a2a2c106d62004d78c6e15756fec11affa2b9665af4e3cd4449322",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-927869",
	                        "08e9b30a8fc0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
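One thing this inspect output makes concrete: every service port of the kic container (22 for SSH, 2376, 5000, 32443, and the non-default apiserver port 8444) is declared with an empty "HostPort" in PortBindings, so Docker assigns ephemeral 127.0.0.1 host ports at start — 33103-33107 in NetworkSettings.Ports above. minikube resolves those assignments with the inspect template that recurs throughout the log below ({{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}). A small self-contained Go sketch of the same lookup, with the container name and ports taken from this output:

    // hostPort asks Docker which 127.0.0.1 port was bound to a given
    // container port, using the same -f template seen in the minikube log.
    // Sketch only; minikube's own code wraps this in more error handling.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        // Against the container above this prints 33103 for 22/tcp (SSH)
        // and 33106 for 8444/tcp (the --apiserver-port under test).
        for _, p := range []string{"22/tcp", "8444/tcp"} {
            hp, err := hostPort("default-k8s-diff-port-927869", p)
            fmt.Println(p, "->", hp, err)
        }
    }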
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-927869 logs -n 25: (1.3480661s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                        │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                  │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cri-dockerd --version                                                                                                                                                                                           │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat containerd --no-pager                                                                                                                                                                             │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo cat /etc/containerd/config.toml                                                                                                                                                                                 │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo containerd config dump                                                                                                                                                                                          │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo crio config                                                                                                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p custom-flannel-307390                                                                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:42:50
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:42:50.614027  406120 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:42:50.614344  406120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:50.614355  406120 out.go:374] Setting ErrFile to fd 2...
	I1101 09:42:50.614360  406120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:42:50.614709  406120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:42:50.615372  406120 out.go:368] Setting JSON to false
	I1101 09:42:50.616698  406120 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5109,"bootTime":1761985062,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:42:50.616802  406120 start.go:143] virtualization: kvm guest
	I1101 09:42:50.619836  406120 out.go:179] * [old-k8s-version-106430] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:42:50.621872  406120 notify.go:221] Checking for updates...
	I1101 09:42:50.621888  406120 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:42:50.628674  406120 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:42:50.630225  406120 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:50.631472  406120 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:42:50.632961  406120 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:42:50.634435  406120 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:42:50.638045  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:50.641595  406120 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1101 09:42:50.642850  406120 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:42:50.683326  406120 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:42:50.683460  406120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:50.783005  406120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:50.759487501 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:50.783150  406120 docker.go:319] overlay module found
	I1101 09:42:50.787488  406120 out.go:179] * Using the docker driver based on existing profile
	I1101 09:42:50.788572  406120 start.go:309] selected driver: docker
	I1101 09:42:50.788595  406120 start.go:930] validating driver "docker" against &{Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:50.788779  406120 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:42:50.789528  406120 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:42:50.912241  406120 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-01 09:42:50.89413684 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:42:50.912615  406120 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:42:50.912665  406120 cni.go:84] Creating CNI manager for ""
	I1101 09:42:50.912718  406120 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:50.912765  406120 start.go:353] cluster config:
	{Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:50.914622  406120 out.go:179] * Starting "old-k8s-version-106430" primary control-plane node in "old-k8s-version-106430" cluster
	I1101 09:42:50.915954  406120 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:42:50.917379  406120 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:42:50.918514  406120 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:42:50.918573  406120 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 09:42:50.918590  406120 cache.go:59] Caching tarball of preloaded images
	I1101 09:42:50.918663  406120 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:42:50.918693  406120 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:42:50.918906  406120 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1101 09:42:50.919196  406120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/config.json ...
	I1101 09:42:50.948106  406120 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:42:50.948138  406120 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:42:50.948154  406120 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:42:50.948189  406120 start.go:360] acquireMachinesLock for old-k8s-version-106430: {Name:mk47cab1e1fd681dae6862a843f54c2590f288ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:42:50.948282  406120 start.go:364] duration metric: took 39.062µs to acquireMachinesLock for "old-k8s-version-106430"
	I1101 09:42:50.948308  406120 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:42:50.948318  406120 fix.go:54] fixHost starting: 
	I1101 09:42:50.948612  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:50.972291  406120 fix.go:112] recreateIfNeeded on old-k8s-version-106430: state=Stopped err=<nil>
	W1101 09:42:50.972324  406120 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:42:50.531203  396593 addons.go:239] Setting addon default-storageclass=true in "embed-certs-214580"
	I1101 09:42:50.531229  396593 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:50.531249  396593 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:50.531260  396593 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:42:50.531310  396593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:42:50.533063  396593 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:42:50.568680  396593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:42:50.568816  396593 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:50.568869  396593 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:50.568974  396593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:42:50.601158  396593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:42:50.613732  396593 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:42:50.676436  396593 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:50.697529  396593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:50.737333  396593 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:50.882009  396593 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1101 09:42:50.886149  396593 node_ready.go:35] waiting up to 6m0s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:42:51.161964  396593 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:42:51.163280  396593 addons.go:515] duration metric: took 670.456657ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:42:51.388894  396593 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-214580" context rescaled to 1 replicas
	I1101 09:42:49.376252  400655 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:42:49.383040  400655 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:42:49.383063  400655 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:42:49.398858  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:42:49.653613  400655 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:42:49.653808  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-927869 minikube.k8s.io/updated_at=2025_11_01T09_42_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=default-k8s-diff-port-927869 minikube.k8s.io/primary=true
	I1101 09:42:49.653892  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:49.665596  400655 ops.go:34] apiserver oom_adj: -16
	I1101 09:42:49.749820  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:50.250114  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:50.750585  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:51.250809  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:51.750006  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:52.250183  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:52.750028  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.250186  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.749999  400655 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:42:53.822757  400655 kubeadm.go:1114] duration metric: took 4.168926091s to wait for elevateKubeSystemPrivileges
	I1101 09:42:53.822793  400655 kubeadm.go:403] duration metric: took 15.047661715s to StartCluster
	I1101 09:42:53.822817  400655 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:53.822903  400655 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:53.824503  400655 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:53.824773  400655 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:53.824788  400655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:42:53.824818  400655 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:42:53.824999  400655 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-927869"
	I1101 09:42:53.825027  400655 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-927869"
	I1101 09:42:53.825039  400655 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-927869"
	I1101 09:42:53.825063  400655 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:42:53.825088  400655 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-927869"
	I1101 09:42:53.825051  400655 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:42:53.825501  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.825647  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.828120  400655 out.go:179] * Verifying Kubernetes components...
	I1101 09:42:53.829949  400655 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:53.849634  400655 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-927869"
	I1101 09:42:53.849672  400655 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:42:53.850090  400655 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:42:53.850960  400655 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:42:53.852691  400655 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:53.852716  400655 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:53.852783  400655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:42:53.882229  400655 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:53.882257  400655 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:53.882320  400655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:42:53.885053  400655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:42:53.907017  400655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:42:53.935846  400655 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:42:53.988391  400655 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:54.013317  400655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:54.045518  400655 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:54.149470  400655 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1101 09:42:54.151155  400655 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:42:54.374035  400655 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:42:50.974210  406120 out.go:252] * Restarting existing docker container for "old-k8s-version-106430" ...
	I1101 09:42:50.974286  406120 cli_runner.go:164] Run: docker start old-k8s-version-106430
	I1101 09:42:51.290157  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:51.314807  406120 kic.go:430] container "old-k8s-version-106430" state is running.
	I1101 09:42:51.315254  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:51.341531  406120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/config.json ...
	I1101 09:42:51.341904  406120 machine.go:94] provisionDockerMachine start ...
	I1101 09:42:51.342010  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:51.365591  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:51.365960  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:51.365981  406120 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:42:51.366590  406120 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47468->127.0.0.1:33108: read: connection reset by peer
	I1101 09:42:54.518255  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-106430
	
	I1101 09:42:54.518288  406120 ubuntu.go:182] provisioning hostname "old-k8s-version-106430"
	I1101 09:42:54.518353  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:54.539831  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:54.540106  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:54.540129  406120 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-106430 && echo "old-k8s-version-106430" | sudo tee /etc/hostname
	I1101 09:42:54.702026  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-106430
	
	I1101 09:42:54.702114  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:54.724817  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:54.725136  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:54.725167  406120 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-106430' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-106430/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-106430' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:42:54.876787  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:42:54.876817  406120 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:42:54.876844  406120 ubuntu.go:190] setting up certificates
	I1101 09:42:54.876853  406120 provision.go:84] configureAuth start
	I1101 09:42:54.876906  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:54.896638  406120 provision.go:143] copyHostCerts
	I1101 09:42:54.896701  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:42:54.896718  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:42:54.896786  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:42:54.896893  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:42:54.896901  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:42:54.896956  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:42:54.897025  406120 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:42:54.897034  406120 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:42:54.897058  406120 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:42:54.897110  406120 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-106430 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-106430]
	I1101 09:42:54.980885  406120 provision.go:177] copyRemoteCerts
	I1101 09:42:54.980976  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:42:54.981016  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.002790  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.107988  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:42:55.129045  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1101 09:42:55.148507  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:42:55.168604  406120 provision.go:87] duration metric: took 291.735137ms to configureAuth
	I1101 09:42:55.168634  406120 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:42:55.168849  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:55.169027  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.187704  406120 main.go:143] libmachine: Using SSH client type: native
	I1101 09:42:55.187966  406120 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I1101 09:42:55.187993  406120 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:42:55.497800  406120 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:42:55.497831  406120 machine.go:97] duration metric: took 4.155886646s to provisionDockerMachine
	I1101 09:42:55.497846  406120 start.go:293] postStartSetup for "old-k8s-version-106430" (driver="docker")
	I1101 09:42:55.497860  406120 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:42:55.497949  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:42:55.498013  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.519255  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.622520  406120 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:42:55.626564  406120 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:42:55.626626  406120 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:42:55.626647  406120 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:42:55.626715  406120 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:42:55.626812  406120 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:42:55.626948  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:42:55.635496  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:42:55.657654  406120 start.go:296] duration metric: took 159.790682ms for postStartSetup
	I1101 09:42:55.657758  406120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:42:55.657821  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.676825  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.778028  406120 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:42:55.783383  406120 fix.go:56] duration metric: took 4.835054698s for fixHost
	I1101 09:42:55.783417  406120 start.go:83] releasing machines lock for "old-k8s-version-106430", held for 4.83512021s
	I1101 09:42:55.783495  406120 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-106430
	I1101 09:42:55.804416  406120 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:42:55.804456  406120 ssh_runner.go:195] Run: cat /version.json
	I1101 09:42:55.804492  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.804505  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:55.824353  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.824865  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:55.981055  406120 ssh_runner.go:195] Run: systemctl --version
	I1101 09:42:55.988383  406120 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:42:56.025779  406120 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:42:56.031204  406120 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:42:56.031292  406120 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:42:56.040425  406120 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:42:56.040454  406120 start.go:496] detecting cgroup driver to use...
	I1101 09:42:56.040493  406120 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:42:56.040550  406120 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:42:56.056165  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:42:56.071243  406120 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:42:56.071318  406120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:42:56.087584  406120 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:42:56.102101  406120 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:42:56.185386  406120 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:42:56.270398  406120 docker.go:234] disabling docker service ...
	I1101 09:42:56.270483  406120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:42:56.287689  406120 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:42:56.302743  406120 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:42:56.390775  406120 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:42:56.477451  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:42:56.490747  406120 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:42:56.507214  406120 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1101 09:42:56.507281  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.518382  406120 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:42:56.518457  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.527846  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.539349  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.548816  406120 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:42:56.557406  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.567380  406120 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.576904  406120 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:42:56.586527  406120 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:42:56.594509  406120 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:42:56.602525  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:56.689010  406120 ssh_runner.go:195] Run: sudo systemctl restart crio
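The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, a pod-scoped conmon cgroup, and an unprivileged-port sysctl, after which the daemon is reloaded and CRI-O restarted. A quick spot-check of the resulting drop-in (an illustrative command, not from the run) is:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf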
	I1101 09:42:56.807304  406120 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:42:56.807374  406120 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:42:56.811774  406120 start.go:564] Will wait 60s for crictl version
	I1101 09:42:56.811826  406120 ssh_runner.go:195] Run: which crictl
	I1101 09:42:56.815686  406120 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:42:56.841111  406120 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:42:56.841202  406120 ssh_runner.go:195] Run: crio --version
	I1101 09:42:56.870245  406120 ssh_runner.go:195] Run: crio --version
	I1101 09:42:56.903409  406120 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	W1101 09:42:52.889922  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	W1101 09:42:54.890128  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	W1101 09:42:57.390285  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	I1101 09:42:56.904675  406120 cli_runner.go:164] Run: docker network inspect old-k8s-version-106430 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:42:56.922956  406120 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:42:56.927507  406120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:42:56.938255  406120 kubeadm.go:884] updating cluster {Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:42:56.938367  406120 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 09:42:56.938406  406120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:42:56.972069  406120 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:42:56.972094  406120 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:42:56.972148  406120 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:42:57.002691  406120 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:42:57.002716  406120 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:42:57.002725  406120 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1101 09:42:57.002856  406120 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-106430 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
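The empty ExecStart= line in the unit above is the standard systemd idiom for clearing the packaged command line before a drop-in substitutes its own; this rendered unit is what later gets written under /etc/systemd/system/kubelet.service.d/. To see the merged unit on the node (illustrative only):

    systemctl cat kubelet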
	I1101 09:42:57.002967  406120 ssh_runner.go:195] Run: crio config
	I1101 09:42:57.051562  406120 cni.go:84] Creating CNI manager for ""
	I1101 09:42:57.051580  406120 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:42:57.051594  406120 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:42:57.051624  406120 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-106430 NodeName:old-k8s-version-106430 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:42:57.051795  406120 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-106430"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:42:57.051865  406120 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1101 09:42:57.060477  406120 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:42:57.060538  406120 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:42:57.069511  406120 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1101 09:42:57.083613  406120 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:42:57.097812  406120 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
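The kubeadm config rendered earlier is what was just copied to /var/tmp/minikube/kubeadm.yaml.new (2162 bytes, matching the three documents above). Assuming the cached v1.28.0 binary supports the subcommand (present in recent kubeadm releases), it could be sanity-checked offline with:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new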
	I1101 09:42:57.111580  406120 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:42:57.115488  406120 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:42:57.126011  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:57.213189  406120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:57.238996  406120 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430 for IP: 192.168.103.2
	I1101 09:42:57.239022  406120 certs.go:195] generating shared ca certs ...
	I1101 09:42:57.239045  406120 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:57.239236  406120 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:42:57.239286  406120 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:42:57.239299  406120 certs.go:257] generating profile certs ...
	I1101 09:42:57.239410  406120 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/client.key
	I1101 09:42:57.239470  406120 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.key.08895b71
	I1101 09:42:57.239520  406120 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.key
	I1101 09:42:57.239670  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:42:57.239711  406120 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:42:57.239721  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:42:57.239755  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:42:57.239792  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:42:57.239816  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:42:57.239872  406120 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:42:57.240646  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:42:57.261275  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:42:57.280725  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:42:57.302620  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:42:57.324849  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1101 09:42:57.346130  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1101 09:42:57.364807  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:42:57.382889  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/old-k8s-version-106430/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:42:57.401595  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:42:57.420604  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:42:57.440611  406120 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:42:57.460759  406120 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:42:57.475075  406120 ssh_runner.go:195] Run: openssl version
	I1101 09:42:57.482420  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:42:57.491762  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.496929  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.497002  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:42:57.536750  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:42:57.545765  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:42:57.554820  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.559339  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.559405  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:42:57.598430  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:42:57.607527  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:42:57.616648  406120 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.620647  406120 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.620708  406120 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:42:57.659548  406120 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
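The .0 symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash lookups: openssl x509 -hash prints the value OpenSSL uses to locate a CA in /etc/ssl/certs. The pattern the log follows reduces to this sketch:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"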
	I1101 09:42:57.668681  406120 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:42:57.672696  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:42:57.708768  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:42:57.747304  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:42:57.796024  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:42:57.836290  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:42:57.889574  406120 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
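Each -checkend 86400 call above exits non-zero if the certificate expires within 24 hours (86,400 seconds), which is how minikube decides whether control-plane certs need regenerating. As a standalone illustration:

    if openssl x509 -noout -checkend 86400 \
        -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "cert valid for at least another day"
    else
      echo "cert expires within 24h"
    fi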
	I1101 09:42:57.936314  406120 kubeadm.go:401] StartCluster: {Name:old-k8s-version-106430 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-106430 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:42:57.936438  406120 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:42:57.936498  406120 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:42:57.970374  406120 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:42:57.970398  406120 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:42:57.970403  406120 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:42:57.970408  406120 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:42:57.970412  406120 cri.go:89] found id: ""
	I1101 09:42:57.970460  406120 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:42:57.984075  406120 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:42:57Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:42:57.984151  406120 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:42:57.993097  406120 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:42:57.993119  406120 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:42:57.993172  406120 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:42:58.001723  406120 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:42:58.003096  406120 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-106430" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:58.004036  406120 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-106430" cluster setting kubeconfig missing "old-k8s-version-106430" context setting]
	I1101 09:42:58.005461  406120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.007975  406120 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:42:58.016714  406120 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 09:42:58.016755  406120 kubeadm.go:602] duration metric: took 23.628873ms to restartPrimaryControlPlane
	I1101 09:42:58.016767  406120 kubeadm.go:403] duration metric: took 80.466912ms to StartCluster
	I1101 09:42:58.016787  406120 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.016859  406120 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:42:58.019146  406120 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:42:58.019406  406120 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:42:54.375638  400655 addons.go:515] duration metric: took 550.809564ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:42:54.654510  400655 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-927869" context rescaled to 1 replicas
	W1101 09:42:56.154636  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:42:58.019624  406120 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:42:58.019516  406120 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:42:58.019681  406120 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019692  406120 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-106430"
	W1101 09:42:58.019698  406120 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:42:58.019725  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.019740  406120 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019723  406120 addons.go:70] Setting dashboard=true in profile "old-k8s-version-106430"
	I1101 09:42:58.019762  406120 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-106430"
	I1101 09:42:58.019781  406120 addons.go:239] Setting addon dashboard=true in "old-k8s-version-106430"
	W1101 09:42:58.019793  406120 addons.go:248] addon dashboard should already be in state true
	I1101 09:42:58.019834  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.020108  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.020244  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.020320  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.022567  406120 out.go:179] * Verifying Kubernetes components...
	I1101 09:42:58.024092  406120 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:42:58.047043  406120 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-106430"
	W1101 09:42:58.047078  406120 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:42:58.047127  406120 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:42:58.047350  406120 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:42:58.048017  406120 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:42:58.048837  406120 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:58.048858  406120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:42:58.048940  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.051652  406120 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:42:58.052846  406120 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:42:58.053934  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:42:58.053961  406120 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:42:58.054033  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.088042  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:58.088630  406120 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:58.088657  406120 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:42:58.088715  406120 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:42:58.089200  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:58.115133  406120 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:42:58.176448  406120 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:42:58.191014  406120 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-106430" to be "Ready" ...
	I1101 09:42:58.206351  406120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:42:58.206583  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:42:58.206597  406120 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:42:58.222704  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:42:58.222727  406120 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:42:58.234887  406120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:42:58.238967  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:42:58.238997  406120 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:42:58.255653  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:42:58.255679  406120 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:42:58.272580  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:42:58.272613  406120 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:42:58.290112  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:42:58.290144  406120 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:42:58.310526  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:42:58.310555  406120 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:42:58.330234  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:42:58.330264  406120 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:42:58.346642  406120 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:42:58.346672  406120 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:42:58.360999  406120 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:43:00.187111  406120 node_ready.go:49] node "old-k8s-version-106430" is "Ready"
	I1101 09:43:00.187161  406120 node_ready.go:38] duration metric: took 1.996099939s for node "old-k8s-version-106430" to be "Ready" ...
	I1101 09:43:00.187179  406120 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:00.187255  406120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:01.024412  406120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.818015558s)
	I1101 09:43:01.024466  406120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.789491554s)
	I1101 09:43:01.442889  406120 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.081846516s)
	I1101 09:43:01.442959  406120 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.255677206s)
	I1101 09:43:01.442996  406120 api_server.go:72] duration metric: took 3.423552979s to wait for apiserver process to appear ...
	I1101 09:43:01.443069  406120 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:01.443095  406120 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:43:01.444371  406120 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-106430 addons enable metrics-server
	
	I1101 09:43:01.445665  406120 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	W1101 09:42:59.890173  396593 node_ready.go:57] node "embed-certs-214580" has "Ready":"False" status (will retry)
	I1101 09:43:01.394076  396593 node_ready.go:49] node "embed-certs-214580" is "Ready"
	I1101 09:43:01.394119  396593 node_ready.go:38] duration metric: took 10.507924999s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:43:01.394138  396593 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:01.394196  396593 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:01.411195  396593 api_server.go:72] duration metric: took 10.918334459s to wait for apiserver process to appear ...
	I1101 09:43:01.411227  396593 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:01.411253  396593 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:43:01.417293  396593 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:43:01.418493  396593 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:01.418528  396593 api_server.go:131] duration metric: took 7.293707ms to wait for apiserver health ...
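The healthz wait simply polls the API server until the endpoint answers 200. Outside the harness the same check can be reproduced with curl; -k is needed because the apiserver certificate is signed by the cluster's own CA:

    curl -k https://192.168.94.2:8443/healthz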
	I1101 09:43:01.418538  396593 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:01.422348  396593 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:01.422388  396593 system_pods.go:61] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.422397  396593 system_pods.go:61] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.422406  396593 system_pods.go:61] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.422411  396593 system_pods.go:61] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.422416  396593 system_pods.go:61] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.422419  396593 system_pods.go:61] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.422423  396593 system_pods.go:61] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.422429  396593 system_pods.go:61] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.422437  396593 system_pods.go:74] duration metric: took 3.892949ms to wait for pod list to return data ...
	I1101 09:43:01.422451  396593 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:01.425015  396593 default_sa.go:45] found service account: "default"
	I1101 09:43:01.425041  396593 default_sa.go:55] duration metric: took 2.583837ms for default service account to be created ...
	I1101 09:43:01.425054  396593 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:01.428957  396593 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.428995  396593 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.429003  396593 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.429012  396593 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.429038  396593 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.429048  396593 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.429053  396593 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.429062  396593 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.429071  396593 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.429119  396593 retry.go:31] will retry after 227.328238ms: missing components: kube-dns
	I1101 09:43:01.662291  396593 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.662326  396593 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.662331  396593 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.662337  396593 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.662341  396593 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.662345  396593 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.663085  396593 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.663107  396593 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.663119  396593 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.663159  396593 retry.go:31] will retry after 276.226658ms: missing components: kube-dns
	I1101 09:43:01.944474  396593 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.944505  396593 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Running
	I1101 09:43:01.944510  396593 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running
	I1101 09:43:01.944516  396593 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running
	I1101 09:43:01.944520  396593 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running
	I1101 09:43:01.944523  396593 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running
	I1101 09:43:01.944526  396593 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running
	I1101 09:43:01.944540  396593 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running
	I1101 09:43:01.944543  396593 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Running
	I1101 09:43:01.944551  396593 system_pods.go:126] duration metric: took 519.491033ms to wait for k8s-apps to be running ...
	I1101 09:43:01.944559  396593 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:01.944612  396593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:01.961365  396593 system_svc.go:56] duration metric: took 16.790691ms WaitForService to wait for kubelet
	I1101 09:43:01.961443  396593 kubeadm.go:587] duration metric: took 11.468588724s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:01.961481  396593 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:01.965235  396593 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:01.965267  396593 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:01.965281  396593 node_conditions.go:105] duration metric: took 3.794232ms to run NodePressure ...
	I1101 09:43:01.965293  396593 start.go:242] waiting for startup goroutines ...
	I1101 09:43:01.965300  396593 start.go:247] waiting for cluster config update ...
	I1101 09:43:01.965311  396593 start.go:256] writing updated cluster config ...
	I1101 09:43:01.965628  396593 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:01.970967  396593 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:01.976406  396593 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.982287  396593 pod_ready.go:94] pod "coredns-66bc5c9577-cmnj8" is "Ready"
	I1101 09:43:01.982321  396593 pod_ready.go:86] duration metric: took 5.884528ms for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.985036  396593 pod_ready.go:83] waiting for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.990273  396593 pod_ready.go:94] pod "etcd-embed-certs-214580" is "Ready"
	I1101 09:43:01.990301  396593 pod_ready.go:86] duration metric: took 5.235782ms for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.992691  396593 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:01.997959  396593 pod_ready.go:94] pod "kube-apiserver-embed-certs-214580" is "Ready"
	I1101 09:43:01.997981  396593 pod_ready.go:86] duration metric: took 5.265586ms for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:02.000124  396593 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:02.377242  396593 pod_ready.go:94] pod "kube-controller-manager-embed-certs-214580" is "Ready"
	I1101 09:43:02.377272  396593 pod_ready.go:86] duration metric: took 377.124261ms for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:42:58.654079  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	W1101 09:43:00.658594  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:43:02.576590  396593 pod_ready.go:83] waiting for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:02.976290  396593 pod_ready.go:94] pod "kube-proxy-49j45" is "Ready"
	I1101 09:43:02.976316  396593 pod_ready.go:86] duration metric: took 399.691169ms for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:03.177497  396593 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:03.576578  396593 pod_ready.go:94] pod "kube-scheduler-embed-certs-214580" is "Ready"
	I1101 09:43:03.576609  396593 pod_ready.go:86] duration metric: took 399.080901ms for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:03.576624  396593 pod_ready.go:40] duration metric: took 1.605614697s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:03.627748  396593 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:43:03.630280  396593 out.go:179] * Done! kubectl is now configured to use "embed-certs-214580" cluster and "default" namespace by default
	I1101 09:43:01.446674  406120 addons.go:515] duration metric: took 3.427169236s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1101 09:43:01.447744  406120 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:43:01.448983  406120 api_server.go:141] control plane version: v1.28.0
	I1101 09:43:01.449007  406120 api_server.go:131] duration metric: took 5.9302ms to wait for apiserver health ...
	I1101 09:43:01.449015  406120 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:01.453386  406120 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:01.453440  406120 system_pods.go:61] "coredns-5dd5756b68-xh2rf" [2dc48063-a93a-46c9-b6da-451a12b954c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.453452  406120 system_pods.go:61] "etcd-old-k8s-version-106430" [6f7386a3-1337-464f-a414-cd3c59f37e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:01.453460  406120 system_pods.go:61] "kindnet-5v6hn" [68338c9c-3108-4c9f-8fed-214858c90ef5] Running
	I1101 09:43:01.453468  406120 system_pods.go:61] "kube-apiserver-old-k8s-version-106430" [c2554645-936b-4a63-8090-580b3bef9961] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:01.453475  406120 system_pods.go:61] "kube-controller-manager-old-k8s-version-106430" [7bd6d2ff-1cd3-48cd-89f6-2b3c68fda714] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:01.453486  406120 system_pods.go:61] "kube-proxy-zqs8f" [834c3d0a-03fc-480c-a4c6-9f010159b1f9] Running
	I1101 09:43:01.453494  406120 system_pods.go:61] "kube-scheduler-old-k8s-version-106430" [8dd03d37-0e38-42c2-8c96-795cf8cf7d73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:01.453503  406120 system_pods.go:61] "storage-provisioner" [b8fde0f9-bc13-41ca-9adc-2b0edc592938] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.453512  406120 system_pods.go:74] duration metric: took 4.489662ms to wait for pod list to return data ...
	I1101 09:43:01.453524  406120 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:01.455675  406120 default_sa.go:45] found service account: "default"
	I1101 09:43:01.455701  406120 default_sa.go:55] duration metric: took 2.169679ms for default service account to be created ...
	I1101 09:43:01.455712  406120 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:01.460873  406120 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:01.460972  406120 system_pods.go:89] "coredns-5dd5756b68-xh2rf" [2dc48063-a93a-46c9-b6da-451a12b954c3] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:01.460992  406120 system_pods.go:89] "etcd-old-k8s-version-106430" [6f7386a3-1337-464f-a414-cd3c59f37e83] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:01.461000  406120 system_pods.go:89] "kindnet-5v6hn" [68338c9c-3108-4c9f-8fed-214858c90ef5] Running
	I1101 09:43:01.461020  406120 system_pods.go:89] "kube-apiserver-old-k8s-version-106430" [c2554645-936b-4a63-8090-580b3bef9961] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:01.461032  406120 system_pods.go:89] "kube-controller-manager-old-k8s-version-106430" [7bd6d2ff-1cd3-48cd-89f6-2b3c68fda714] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:01.461041  406120 system_pods.go:89] "kube-proxy-zqs8f" [834c3d0a-03fc-480c-a4c6-9f010159b1f9] Running
	I1101 09:43:01.461050  406120 system_pods.go:89] "kube-scheduler-old-k8s-version-106430" [8dd03d37-0e38-42c2-8c96-795cf8cf7d73] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:01.461057  406120 system_pods.go:89] "storage-provisioner" [b8fde0f9-bc13-41ca-9adc-2b0edc592938] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:01.461073  406120 system_pods.go:126] duration metric: took 5.35372ms to wait for k8s-apps to be running ...
	I1101 09:43:01.461084  406120 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:01.461171  406120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:01.478042  406120 system_svc.go:56] duration metric: took 16.947484ms WaitForService to wait for kubelet
	I1101 09:43:01.478073  406120 kubeadm.go:587] duration metric: took 3.458630412s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:01.478102  406120 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:01.481243  406120 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:01.481271  406120 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:01.481287  406120 node_conditions.go:105] duration metric: took 3.179335ms to run NodePressure ...
	I1101 09:43:01.481305  406120 start.go:242] waiting for startup goroutines ...
	I1101 09:43:01.481315  406120 start.go:247] waiting for cluster config update ...
	I1101 09:43:01.481335  406120 start.go:256] writing updated cluster config ...
	I1101 09:43:01.481614  406120 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:01.486105  406120 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:01.491368  406120 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xh2rf" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:43:03.497602  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
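
Before listing pods, the 406120 run above (like the 400655 run that follows) verifies the control plane by fetching /healthz and expecting an HTTP 200 "ok". A minimal Go sketch of the same probe; the URL matches the 406120 log line, TLS verification is skipped purely to keep the sketch self-contained, and it assumes /healthz is anonymously readable (the default system:public-info-viewer binding normally allows this):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skipping verification is for illustration only; the real wait
			// presents the cluster's client certificates instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
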
	W1101 09:43:03.154362  400655 node_ready.go:57] node "default-k8s-diff-port-927869" has "Ready":"False" status (will retry)
	I1101 09:43:05.154723  400655 node_ready.go:49] node "default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:05.154755  400655 node_ready.go:38] duration metric: took 11.003549181s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:43:05.154769  400655 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:05.154817  400655 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:05.167386  400655 api_server.go:72] duration metric: took 11.342569622s to wait for apiserver process to appear ...
	I1101 09:43:05.167411  400655 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:05.167431  400655 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 09:43:05.171809  400655 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 09:43:05.172865  400655 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:05.172890  400655 api_server.go:131] duration metric: took 5.472974ms to wait for apiserver health ...
	I1101 09:43:05.172899  400655 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:05.176330  400655 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:05.176405  400655 system_pods.go:61] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.176414  400655 system_pods.go:61] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.176422  400655 system_pods.go:61] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.176427  400655 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.176433  400655 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.176439  400655 system_pods.go:61] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.176447  400655 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.176455  400655 system_pods.go:61] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.176463  400655 system_pods.go:74] duration metric: took 3.558504ms to wait for pod list to return data ...
	I1101 09:43:05.176475  400655 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:05.181273  400655 default_sa.go:45] found service account: "default"
	I1101 09:43:05.181302  400655 default_sa.go:55] duration metric: took 4.820177ms for default service account to be created ...
	I1101 09:43:05.181312  400655 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:05.184340  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:05.184370  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.184375  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.184381  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.184384  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.184388  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.184392  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.184395  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.184400  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.184422  400655 retry.go:31] will retry after 237.080714ms: missing components: kube-dns
	I1101 09:43:05.426136  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:05.426172  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.426178  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.426184  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.426188  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.426191  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.426195  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.426198  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.426204  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.426223  400655 retry.go:31] will retry after 237.118658ms: missing components: kube-dns
	I1101 09:43:05.666570  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:05.666625  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:05.666633  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:05.666642  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:05.666648  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:05.666653  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:05.666658  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:05.666663  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:05.666672  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:05.666691  400655 retry.go:31] will retry after 395.981375ms: missing components: kube-dns
	I1101 09:43:06.068028  400655 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:06.068086  400655 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Running
	I1101 09:43:06.068096  400655 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running
	I1101 09:43:06.068102  400655 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running
	I1101 09:43:06.068110  400655 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running
	I1101 09:43:06.068116  400655 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running
	I1101 09:43:06.068121  400655 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running
	I1101 09:43:06.068127  400655 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running
	I1101 09:43:06.068139  400655 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Running
	I1101 09:43:06.068149  400655 system_pods.go:126] duration metric: took 886.830082ms to wait for k8s-apps to be running ...
	I1101 09:43:06.068163  400655 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:06.068224  400655 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:06.083264  400655 system_svc.go:56] duration metric: took 15.091461ms WaitForService to wait for kubelet
	I1101 09:43:06.083307  400655 kubeadm.go:587] duration metric: took 12.25849379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:06.083332  400655 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:06.086982  400655 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:06.087019  400655 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:06.087036  400655 node_conditions.go:105] duration metric: took 3.698423ms to run NodePressure ...
	I1101 09:43:06.087054  400655 start.go:242] waiting for startup goroutines ...
	I1101 09:43:06.087064  400655 start.go:247] waiting for cluster config update ...
	I1101 09:43:06.087077  400655 start.go:256] writing updated cluster config ...
	I1101 09:43:06.087420  400655 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:06.092662  400655 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:06.097988  400655 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.103499  400655 pod_ready.go:94] pod "coredns-66bc5c9577-mlk9t" is "Ready"
	I1101 09:43:06.103537  400655 pod_ready.go:86] duration metric: took 5.514026ms for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.106066  400655 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.110682  400655 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:06.110711  400655 pod_ready.go:86] duration metric: took 4.616826ms for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.113186  400655 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.117832  400655 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:06.117855  400655 pod_ready.go:86] duration metric: took 4.643427ms for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.120106  400655 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.498078  400655 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:06.498104  400655 pod_ready.go:86] duration metric: took 377.974782ms for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:06.699099  400655 pod_ready.go:83] waiting for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.098234  400655 pod_ready.go:94] pod "kube-proxy-dszvg" is "Ready"
	I1101 09:43:07.098264  400655 pod_ready.go:86] duration metric: took 399.13582ms for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.298565  400655 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.697890  400655 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:07.697963  400655 pod_ready.go:86] duration metric: took 399.372786ms for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:07.697978  400655 pod_ready.go:40] duration metric: took 1.605281665s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:07.748049  400655 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:43:07.750223  400655 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927869" cluster and "default" namespace by default
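
The repeated "will retry after 237.080714ms: missing components: kube-dns" lines above come from a jittered retry loop: each failed component check sleeps a randomized interval before re-listing the kube-system pods, which is why the three list dumps differ only in coredns and storage-provisioner flipping from Pending to Running. A sketch of that shape (not minikube's retry.go; the base delay and attempt cap are assumptions):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor re-runs check with jittered sleeps until it succeeds or attempts run out.
	func waitFor(check func() error, maxAttempts int, base time.Duration) error {
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			err := check()
			if err == nil {
				return nil
			}
			// Jitter keeps concurrent waiters from retrying in lockstep.
			d := base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return errors.New("timed out")
	}

	func main() {
		pending := 3 // pretend coredns needs three checks to come up
		err := waitFor(func() error {
			if pending > 0 {
				pending--
				return errors.New("missing components: kube-dns")
			}
			return nil
		}, 10, 200*time.Millisecond)
		fmt.Println("done, err =", err)
	}
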
	W1101 09:43:05.996507  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:07.997891  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:10.497973  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:12.498979  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	W1101 09:43:14.998094  406120 pod_ready.go:104] pod "coredns-5dd5756b68-xh2rf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:43:05 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:05.321819733Z" level=info msg="Starting container: 232e058550115f2956c5c96358dfa83df189e3caa1321b4df006cc09bda62926" id=7844b0fb-4399-434b-9e2a-38e4c20536b2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:05 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:05.323832123Z" level=info msg="Started container" PID=1817 containerID=232e058550115f2956c5c96358dfa83df189e3caa1321b4df006cc09bda62926 description=kube-system/coredns-66bc5c9577-mlk9t/coredns id=7844b0fb-4399-434b-9e2a-38e4c20536b2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81fe07425f6b47aed410e128c6a3ddfadd9ac056d54b21f9ae59376a50a5c291
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.237381436Z" level=info msg="Running pod sandbox: default/busybox/POD" id=64aea5dd-cbd2-4ba9-bbc3-8fbaed5b6e2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.237495394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.24378824Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0fcedb8ca1303d54bbfb549eb8ea1215b0e06ff110cef9c2dafe3d4818326700 UID:b82218bf-2168-45f8-93dd-1a8f99a46423 NetNS:/var/run/netns/ce868601-5514-4cc1-b2cd-e369238c0465 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e9a4a8}] Aliases:map[]}"
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.243870381Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.253819661Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0fcedb8ca1303d54bbfb549eb8ea1215b0e06ff110cef9c2dafe3d4818326700 UID:b82218bf-2168-45f8-93dd-1a8f99a46423 NetNS:/var/run/netns/ce868601-5514-4cc1-b2cd-e369238c0465 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e9a4a8}] Aliases:map[]}"
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.254000605Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.254996433Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.255760988Z" level=info msg="Ran pod sandbox 0fcedb8ca1303d54bbfb549eb8ea1215b0e06ff110cef9c2dafe3d4818326700 with infra container: default/busybox/POD" id=64aea5dd-cbd2-4ba9-bbc3-8fbaed5b6e2a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.256972424Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5b93894-138d-43e5-9dec-30e640651113 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.257100813Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e5b93894-138d-43e5-9dec-30e640651113 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.257149796Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e5b93894-138d-43e5-9dec-30e640651113 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.258082324Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b75d3db1-4ee6-4114-bd2e-50fc00099d62 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:08 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:08.259774494Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.348507682Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b75d3db1-4ee6-4114-bd2e-50fc00099d62 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.349372073Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=72a73b88-4816-4ea9-95ac-6e529b9f57a7 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.350714952Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5f9b646f-190f-4c25-ba6b-7c4e07652c3e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.354233004Z" level=info msg="Creating container: default/busybox/busybox" id=33331792-cadd-4a5f-817d-ee590ec9dccf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.354356257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.357898103Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.358342692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.387112676Z" level=info msg="Created container 881476900b3e6f0d42758b09e1c978d830156b5297062b547d2b2abb93821123: default/busybox/busybox" id=33331792-cadd-4a5f-817d-ee590ec9dccf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.387978533Z" level=info msg="Starting container: 881476900b3e6f0d42758b09e1c978d830156b5297062b547d2b2abb93821123" id=21c792eb-e7df-45a9-aaf9-b64ad12a05f9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:10 default-k8s-diff-port-927869 crio[771]: time="2025-11-01T09:43:10.390001668Z" level=info msg="Started container" PID=1894 containerID=881476900b3e6f0d42758b09e1c978d830156b5297062b547d2b2abb93821123 description=default/busybox/busybox id=21c792eb-e7df-45a9-aaf9-b64ad12a05f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fcedb8ca1303d54bbfb549eb8ea1215b0e06ff110cef9c2dafe3d4818326700
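
Each "Running pod sandbox", "Pulling image", and "Started container" message above is CRI-O servicing a CRI gRPC call on its local socket, and the container-status table below is the same state read back out. A sketch that lists containers over the default CRI-O socket (requires root on the node plus the k8s.io/cri-api and google.golang.org/grpc modules; the socket path is CRI-O's default, not taken from this log, and this is not the test harness's code):

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the local CRI-O socket; no TLS is used on a unix socket.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Roughly the first columns of the table below.
			fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
		}
	}
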
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	881476900b3e6       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   0fcedb8ca1303       busybox                                                default
	232e058550115       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   81fe07425f6b4       coredns-66bc5c9577-mlk9t                               kube-system
	c36e1adf42e96       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   3f654d253c2b4       storage-provisioner                                    kube-system
	9020ebe9db510       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   57731ade4a699       kube-proxy-dszvg                                       kube-system
	d152c455aa404       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   c5ba6d1d41f7f       kindnet-g9zdl                                          kube-system
	3abef6dab0b03       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   6a4ec58d8683b       kube-apiserver-default-k8s-diff-port-927869            kube-system
	049836a40b795       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   52e0ddc0d3fb2       kube-scheduler-default-k8s-diff-port-927869            kube-system
	119478bb908b8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   01fa08dcc3f8a       kube-controller-manager-default-k8s-diff-port-927869   kube-system
	57cb07abeb8d4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   f0192574f2525       etcd-default-k8s-diff-port-927869                      kube-system
	
	
	==> coredns [232e058550115f2956c5c96358dfa83df189e3caa1321b4df006cc09bda62926] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38657 - 47518 "HINFO IN 4492636394678936518.2822822200482807698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.067468288s
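
The lone HINFO query from 127.0.0.1 with a random name is characteristic of CoreDNS's loop-detection self-probe; the NXDOMAIN answer recorded here indicates no forwarding loop. A sketch that exercises the same resolver from a pod's point of view, dialing the kube-dns ClusterIP allocated in this report (10.96.0.10); it only works from inside the cluster network:

	package main

	import (
		"context"
		"fmt"
		"net"
		"time"
	)

	func main() {
		r := &net.Resolver{
			PreferGo: true,
			// Force every lookup through the kube-dns ClusterIP from the log.
			Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
				d := net.Dialer{Timeout: 2 * time.Second}
				return d.DialContext(ctx, "udp", "10.96.0.10:53")
			},
		}
		addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
		fmt.Println(addrs, err)
	}
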
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-927869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-927869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=default-k8s-diff-port-927869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-927869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:43:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:43:04 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:43:04 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:43:04 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:43:04 +0000   Sat, 01 Nov 2025 09:43:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-927869
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f6bc8c84-79e6-433c-bb02-212f45767f33
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-mlk9t                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-927869                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-g9zdl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-927869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-927869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-dszvg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-927869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-927869 event: Registered Node default-k8s-diff-port-927869 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-927869 status is now: NodeReady
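
The node_conditions.go lines earlier in the log read exactly the fields shown in this dump: the pressure conditions plus CPU and ephemeral-storage capacity (8 CPUs, 304681132Ki). A client-go sketch that fetches them for this node; the kubeconfig path and output format are assumptions:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)

		node, err := cs.CoreV1().Nodes().Get(context.TODO(),
			"default-k8s-diff-port-927869", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Mirrors the Conditions block of `kubectl describe node` above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
		fmt.Println("cpu:", node.Status.Capacity.Cpu().String(),
			"ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	}
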
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [57cb07abeb8d4ccf05d7b6fb028dce48be9c8f3af482a24b3d2b0dd3af02339a] <==
	{"level":"warn","ts":"2025-11-01T09:42:45.085572Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.092830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.101899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.109796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.117361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.124361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.138638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.145855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.153407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.161184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.168952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.177007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.185312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.192699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.201112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.208292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.215302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.223281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.230584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.238201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.255588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.260263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.267584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.275695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:42:45.337020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34570","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:43:17 up  1:25,  0 user,  load average: 4.62, 4.44, 2.85
	Linux default-k8s-diff-port-927869 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d152c455aa404f71a44c578a45982de27cd3b37a065634fe69e9ea4970a55ca8] <==
	I1101 09:42:54.554815       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:42:54.555103       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:42:54.555256       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:42:54.555269       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:42:54.555282       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:42:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:42:54.762048       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:42:54.762069       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:42:54.762076       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:42:54.762350       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:42:55.151453       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:42:55.151495       1 metrics.go:72] Registering metrics
	I1101 09:42:55.151582       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:04.763285       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:43:04.763361       1 main.go:301] handling current node
	I1101 09:43:14.763420       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:43:14.763464       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3abef6dab0b03b5cdfa6f5599c93c3521da2b321fa2b8feb74c22af3b39cf547] <==
	I1101 09:42:45.919479       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:42:45.926628       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:42:45.926666       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1101 09:42:45.929588       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:42:45.929591       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:45.933478       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:45.933748       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:42:46.800561       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:42:46.804667       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:42:46.804692       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:42:47.371519       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:42:47.434692       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:42:47.498761       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:42:47.509260       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1101 09:42:47.510847       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:42:47.516723       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:42:47.857422       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:42:48.751111       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:42:48.762400       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:42:48.771623       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:42:53.561728       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:42:53.813808       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:53.818151       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:42:53.910527       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1101 09:43:16.049410       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:45906: use of closed network connection
	
	
	==> kube-controller-manager [119478bb908b808f6b98d6d45f321bba791cb29fd80d927a1ed52a0ae9bdb13c] <==
	I1101 09:42:52.856373       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:42:52.856393       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:42:52.856433       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:42:52.856475       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:42:52.856517       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:42:52.857763       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:42:52.857822       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:42:52.857847       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:42:52.857890       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:42:52.857865       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:42:52.857851       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:42:52.857877       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:42:52.857938       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:42:52.858115       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:42:52.859190       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:42:52.860451       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:42:52.860588       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:42:52.860703       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:42:52.861013       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:42:52.867026       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:42:52.869249       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:42:52.876684       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:42:52.882951       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:42:52.884011       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:43:07.809024       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9020ebe9db5106705b1fe931ce220f1666a322b5fbdc7cbbcd3052acfd8e37f5] <==
	I1101 09:42:54.346291       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:42:54.421463       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:42:54.522070       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:42:54.522128       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:42:54.522228       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:42:54.545001       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:42:54.545061       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:42:54.550688       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:42:54.552801       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:42:54.552830       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:42:54.554434       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:42:54.554460       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:42:54.554462       1 config.go:200] "Starting service config controller"
	I1101 09:42:54.554480       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:42:54.554501       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:42:54.554507       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:42:54.554525       1 config.go:309] "Starting node config controller"
	I1101 09:42:54.554530       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:42:54.554537       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:42:54.655474       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:42:54.655497       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:42:54.655477       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [049836a40b795187c10ba6368d517b6b53e124d4082418ff876f45c3775bde2b] <==
	E1101 09:42:45.878387       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:42:45.878413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:42:45.878220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:42:45.878099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:42:45.878671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:42:45.878729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:42:45.878791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:42:45.878866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:42:45.878966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:42:45.878989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:42:46.800015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:42:46.830414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:42:46.861988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:42:46.946292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:42:46.969040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:42:46.991790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:42:47.002107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:42:47.015807       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:42:47.047364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:42:47.073691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:42:47.074525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:42:47.090267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:42:47.127545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:42:47.169816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1101 09:42:48.875263       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:42:49 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:49.644813    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-927869" podStartSLOduration=1.6447900469999999 podStartE2EDuration="1.644790047s" podCreationTimestamp="2025-11-01 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:49.64479084 +0000 UTC m=+1.143393700" watchObservedRunningTime="2025-11-01 09:42:49.644790047 +0000 UTC m=+1.143392906"
	Nov 01 09:42:49 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:49.654149    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-927869" podStartSLOduration=1.654125728 podStartE2EDuration="1.654125728s" podCreationTimestamp="2025-11-01 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:49.654114448 +0000 UTC m=+1.152717307" watchObservedRunningTime="2025-11-01 09:42:49.654125728 +0000 UTC m=+1.152728600"
	Nov 01 09:42:49 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:49.681055    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-927869" podStartSLOduration=1.6810318560000002 podStartE2EDuration="1.681031856s" podCreationTimestamp="2025-11-01 09:42:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:49.66969277 +0000 UTC m=+1.168295614" watchObservedRunningTime="2025-11-01 09:42:49.681031856 +0000 UTC m=+1.179634715"
	Nov 01 09:42:49 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:49.693218    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-927869" podStartSLOduration=2.693194062 podStartE2EDuration="2.693194062s" podCreationTimestamp="2025-11-01 09:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:49.681001298 +0000 UTC m=+1.179604180" watchObservedRunningTime="2025-11-01 09:42:49.693194062 +0000 UTC m=+1.191796917"
	Nov 01 09:42:52 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:52.944036    1297 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 01 09:42:52 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:52.945866    1297 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.020662    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8a5182c-c2b0-4b2b-a8cf-531baef0a83d-xtables-lock\") pod \"kindnet-g9zdl\" (UID: \"e8a5182c-c2b0-4b2b-a8cf-531baef0a83d\") " pod="kube-system/kindnet-g9zdl"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.020720    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxhx8\" (UniqueName: \"kubernetes.io/projected/e8a5182c-c2b0-4b2b-a8cf-531baef0a83d-kube-api-access-cxhx8\") pod \"kindnet-g9zdl\" (UID: \"e8a5182c-c2b0-4b2b-a8cf-531baef0a83d\") " pod="kube-system/kindnet-g9zdl"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.020755    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/17bd8a33-3ad1-4195-8ff9-dd78085ab995-kube-proxy\") pod \"kube-proxy-dszvg\" (UID: \"17bd8a33-3ad1-4195-8ff9-dd78085ab995\") " pod="kube-system/kube-proxy-dszvg"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.020828    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/17bd8a33-3ad1-4195-8ff9-dd78085ab995-xtables-lock\") pod \"kube-proxy-dszvg\" (UID: \"17bd8a33-3ad1-4195-8ff9-dd78085ab995\") " pod="kube-system/kube-proxy-dszvg"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.020922    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e8a5182c-c2b0-4b2b-a8cf-531baef0a83d-cni-cfg\") pod \"kindnet-g9zdl\" (UID: \"e8a5182c-c2b0-4b2b-a8cf-531baef0a83d\") " pod="kube-system/kindnet-g9zdl"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.020997    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8a5182c-c2b0-4b2b-a8cf-531baef0a83d-lib-modules\") pod \"kindnet-g9zdl\" (UID: \"e8a5182c-c2b0-4b2b-a8cf-531baef0a83d\") " pod="kube-system/kindnet-g9zdl"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.021032    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7l2b\" (UniqueName: \"kubernetes.io/projected/17bd8a33-3ad1-4195-8ff9-dd78085ab995-kube-api-access-p7l2b\") pod \"kube-proxy-dszvg\" (UID: \"17bd8a33-3ad1-4195-8ff9-dd78085ab995\") " pod="kube-system/kube-proxy-dszvg"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.021064    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/17bd8a33-3ad1-4195-8ff9-dd78085ab995-lib-modules\") pod \"kube-proxy-dszvg\" (UID: \"17bd8a33-3ad1-4195-8ff9-dd78085ab995\") " pod="kube-system/kube-proxy-dszvg"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.660687    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dszvg" podStartSLOduration=1.6606132200000001 podStartE2EDuration="1.66061322s" podCreationTimestamp="2025-11-01 09:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:54.647376165 +0000 UTC m=+6.145979025" watchObservedRunningTime="2025-11-01 09:42:54.66061322 +0000 UTC m=+6.159216079"
	Nov 01 09:42:54 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:42:54.993345    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g9zdl" podStartSLOduration=1.99332123 podStartE2EDuration="1.99332123s" podCreationTimestamp="2025-11-01 09:42:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:42:54.671986349 +0000 UTC m=+6.170589207" watchObservedRunningTime="2025-11-01 09:42:54.99332123 +0000 UTC m=+6.491924089"
	Nov 01 09:43:04 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:04.938611    1297 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 01 09:43:05 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:05.002260    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/500c8e66-5d34-41b1-b23f-fe5858986803-config-volume\") pod \"coredns-66bc5c9577-mlk9t\" (UID: \"500c8e66-5d34-41b1-b23f-fe5858986803\") " pod="kube-system/coredns-66bc5c9577-mlk9t"
	Nov 01 09:43:05 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:05.002319    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw6kw\" (UniqueName: \"kubernetes.io/projected/500c8e66-5d34-41b1-b23f-fe5858986803-kube-api-access-jw6kw\") pod \"coredns-66bc5c9577-mlk9t\" (UID: \"500c8e66-5d34-41b1-b23f-fe5858986803\") " pod="kube-system/coredns-66bc5c9577-mlk9t"
	Nov 01 09:43:05 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:05.002370    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs67p\" (UniqueName: \"kubernetes.io/projected/0a2ed6da-a87e-4c60-b4b0-2e5644c99652-kube-api-access-cs67p\") pod \"storage-provisioner\" (UID: \"0a2ed6da-a87e-4c60-b4b0-2e5644c99652\") " pod="kube-system/storage-provisioner"
	Nov 01 09:43:05 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:05.002455    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0a2ed6da-a87e-4c60-b4b0-2e5644c99652-tmp\") pod \"storage-provisioner\" (UID: \"0a2ed6da-a87e-4c60-b4b0-2e5644c99652\") " pod="kube-system/storage-provisioner"
	Nov 01 09:43:05 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:05.676121    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mlk9t" podStartSLOduration=11.676096876999999 podStartE2EDuration="11.676096877s" podCreationTimestamp="2025-11-01 09:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:43:05.676078215 +0000 UTC m=+17.174681074" watchObservedRunningTime="2025-11-01 09:43:05.676096877 +0000 UTC m=+17.174699737"
	Nov 01 09:43:05 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:05.686280    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.68625936 podStartE2EDuration="11.68625936s" podCreationTimestamp="2025-11-01 09:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:43:05.685769971 +0000 UTC m=+17.184372832" watchObservedRunningTime="2025-11-01 09:43:05.68625936 +0000 UTC m=+17.184862219"
	Nov 01 09:43:08 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:08.019313    1297 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8vtx\" (UniqueName: \"kubernetes.io/projected/b82218bf-2168-45f8-93dd-1a8f99a46423-kube-api-access-z8vtx\") pod \"busybox\" (UID: \"b82218bf-2168-45f8-93dd-1a8f99a46423\") " pod="default/busybox"
	Nov 01 09:43:10 default-k8s-diff-port-927869 kubelet[1297]: I1101 09:43:10.687284    1297 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.5945591719999999 podStartE2EDuration="3.687261501s" podCreationTimestamp="2025-11-01 09:43:07 +0000 UTC" firstStartedPulling="2025-11-01 09:43:08.257525353 +0000 UTC m=+19.756128204" lastFinishedPulling="2025-11-01 09:43:10.350227695 +0000 UTC m=+21.848830533" observedRunningTime="2025-11-01 09:43:10.687049375 +0000 UTC m=+22.185652235" watchObservedRunningTime="2025-11-01 09:43:10.687261501 +0000 UTC m=+22.185864359"
	
	
	==> storage-provisioner [c36e1adf42e9635bfe6c42949d10087ca1be5a67441ab27c5dbe3470ec9baa03] <==
	I1101 09:43:05.331755       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:43:05.341673       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:43:05.341745       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1101 09:43:05.344486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:05.351496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:05.351700       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:43:05.351937       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-927869_eb9b74de-2e7a-4ea9-bcd2-316240b547e1!
	I1101 09:43:05.351882       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b2d53abe-0ced-4f54-9ebd-e07eb6295af8", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-927869_eb9b74de-2e7a-4ea9-bcd2-316240b547e1 became leader
	W1101 09:43:05.354980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:05.358725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:05.452761       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-927869_eb9b74de-2e7a-4ea9-bcd2-316240b547e1!
	W1101 09:43:07.361700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:07.367166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:09.370668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:09.375044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:11.378443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:11.383082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:13.386421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:13.390390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:15.394459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:15.402358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:17.406579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:17.414802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.73s)
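A note on the repeated storage-provisioner warnings in the logs above: the provisioner's leader election still takes its lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), and v1 Endpoints is deprecated in v1.33+ in favor of coordination.k8s.io Leases. For illustration only, here is a minimal client-go sketch of the Lease-based equivalent; it is not the provisioner's actual code, and the lease name, namespace, and timings are assumptions carried over from the log:

// Minimal sketch (assumption: NOT minikube's storage-provisioner code) of
// Lease-based leader election with client-go, the non-deprecated replacement
// for the v1 Endpoints lock that produces the warnings above.
package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Assumes the process runs in-cluster, as the provisioner pod does.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// A Lease in kube-system standing in for the "k8s.io-minikube-hostpath"
	// Endpoints object seen in the log (name/namespace copied from the log).
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost leadership; shutting down")
			},
		},
	})
}

Switching the lock type this way removes the deprecation warnings, since the election no longer reads or writes v1 Endpoints.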

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (8.03s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-106430 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-106430 --alsologtostderr -v=1: exit status 80 (2.525253306s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-106430 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:43:47.909517  419741 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:43:47.909794  419741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:43:47.909804  419741 out.go:374] Setting ErrFile to fd 2...
	I1101 09:43:47.909808  419741 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:43:47.910047  419741 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:43:47.910317  419741 out.go:368] Setting JSON to false
	I1101 09:43:47.910357  419741 mustload.go:66] Loading cluster: old-k8s-version-106430
	I1101 09:43:47.910718  419741 config.go:182] Loaded profile config "old-k8s-version-106430": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1101 09:43:47.911190  419741 cli_runner.go:164] Run: docker container inspect old-k8s-version-106430 --format={{.State.Status}}
	I1101 09:43:47.933250  419741 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:43:47.933554  419741 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:43:48.007817  419741 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-01 09:43:47.994611363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:43:48.008746  419741 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-106430 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:43:48.010927  419741 out.go:179] * Pausing node old-k8s-version-106430 ... 
	I1101 09:43:48.012428  419741 host.go:66] Checking if "old-k8s-version-106430" exists ...
	I1101 09:43:48.012801  419741 ssh_runner.go:195] Run: systemctl --version
	I1101 09:43:48.012854  419741 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-106430
	I1101 09:43:48.033701  419741 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/old-k8s-version-106430/id_rsa Username:docker}
	I1101 09:43:48.146283  419741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:48.160423  419741 pause.go:52] kubelet running: true
	I1101 09:43:48.160501  419741 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:43:48.395222  419741 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:43:48.395368  419741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:43:48.471486  419741 cri.go:89] found id: "d6d2f3a1c4ad0645664556baf4dc3811e4149eaae2198bcfc7acb38d3e3375d9"
	I1101 09:43:48.471513  419741 cri.go:89] found id: "cf741a18f69cfdad6379c162ce83384fca951d4966d6fad0581fe96cf1e91908"
	I1101 09:43:48.471517  419741 cri.go:89] found id: "2cdc9fdfdcd814051d3dd77cdf55c61477f757879bf74593ca0dd53e09115dbc"
	I1101 09:43:48.471520  419741 cri.go:89] found id: "4691c84ef3d07ba57728ac09a4552d6f8bf0fcc54705555278513250908efe00"
	I1101 09:43:48.471522  419741 cri.go:89] found id: "fe67b21efed6f525df5544423bd5e06bbcd9f87fc9fcb3d89a75da908e8b778a"
	I1101 09:43:48.471525  419741 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:43:48.471527  419741 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:43:48.471530  419741 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:43:48.471532  419741 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:43:48.471544  419741 cri.go:89] found id: "c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	I1101 09:43:48.471547  419741 cri.go:89] found id: "a64f928570c8e93d7275efca3d34ba9452ed83d5461da05e9ccb47d00976bc06"
	I1101 09:43:48.471549  419741 cri.go:89] found id: ""
	I1101 09:43:48.471589  419741 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:43:48.484248  419741 retry.go:31] will retry after 140.612358ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:48Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:43:48.625648  419741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:48.650146  419741 pause.go:52] kubelet running: false
	I1101 09:43:48.650216  419741 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:43:48.801860  419741 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:43:48.801965  419741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:43:48.882028  419741 cri.go:89] found id: "d6d2f3a1c4ad0645664556baf4dc3811e4149eaae2198bcfc7acb38d3e3375d9"
	I1101 09:43:48.882052  419741 cri.go:89] found id: "cf741a18f69cfdad6379c162ce83384fca951d4966d6fad0581fe96cf1e91908"
	I1101 09:43:48.882058  419741 cri.go:89] found id: "2cdc9fdfdcd814051d3dd77cdf55c61477f757879bf74593ca0dd53e09115dbc"
	I1101 09:43:48.882063  419741 cri.go:89] found id: "4691c84ef3d07ba57728ac09a4552d6f8bf0fcc54705555278513250908efe00"
	I1101 09:43:48.882067  419741 cri.go:89] found id: "fe67b21efed6f525df5544423bd5e06bbcd9f87fc9fcb3d89a75da908e8b778a"
	I1101 09:43:48.882072  419741 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:43:48.882075  419741 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:43:48.882079  419741 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:43:48.882082  419741 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:43:48.882096  419741 cri.go:89] found id: "c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	I1101 09:43:48.882100  419741 cri.go:89] found id: "a64f928570c8e93d7275efca3d34ba9452ed83d5461da05e9ccb47d00976bc06"
	I1101 09:43:48.882104  419741 cri.go:89] found id: ""
	I1101 09:43:48.882165  419741 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:43:48.896535  419741 retry.go:31] will retry after 280.84822ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:48Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:43:49.178179  419741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:49.193004  419741 pause.go:52] kubelet running: false
	I1101 09:43:49.193083  419741 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:43:49.336207  419741 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:43:49.336295  419741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:43:49.408598  419741 cri.go:89] found id: "d6d2f3a1c4ad0645664556baf4dc3811e4149eaae2198bcfc7acb38d3e3375d9"
	I1101 09:43:49.408630  419741 cri.go:89] found id: "cf741a18f69cfdad6379c162ce83384fca951d4966d6fad0581fe96cf1e91908"
	I1101 09:43:49.408636  419741 cri.go:89] found id: "2cdc9fdfdcd814051d3dd77cdf55c61477f757879bf74593ca0dd53e09115dbc"
	I1101 09:43:49.408643  419741 cri.go:89] found id: "4691c84ef3d07ba57728ac09a4552d6f8bf0fcc54705555278513250908efe00"
	I1101 09:43:49.408647  419741 cri.go:89] found id: "fe67b21efed6f525df5544423bd5e06bbcd9f87fc9fcb3d89a75da908e8b778a"
	I1101 09:43:49.408652  419741 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:43:49.408656  419741 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:43:49.408660  419741 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:43:49.408663  419741 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:43:49.408671  419741 cri.go:89] found id: "c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	I1101 09:43:49.408675  419741 cri.go:89] found id: "a64f928570c8e93d7275efca3d34ba9452ed83d5461da05e9ccb47d00976bc06"
	I1101 09:43:49.408679  419741 cri.go:89] found id: ""
	I1101 09:43:49.408729  419741 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:43:49.424275  419741 retry.go:31] will retry after 655.182841ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:49Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:43:50.079765  419741 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:50.094754  419741 pause.go:52] kubelet running: false
	I1101 09:43:50.094820  419741 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:43:50.268207  419741 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:43:50.268288  419741 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:43:50.338289  419741 cri.go:89] found id: "d6d2f3a1c4ad0645664556baf4dc3811e4149eaae2198bcfc7acb38d3e3375d9"
	I1101 09:43:50.338318  419741 cri.go:89] found id: "cf741a18f69cfdad6379c162ce83384fca951d4966d6fad0581fe96cf1e91908"
	I1101 09:43:50.338324  419741 cri.go:89] found id: "2cdc9fdfdcd814051d3dd77cdf55c61477f757879bf74593ca0dd53e09115dbc"
	I1101 09:43:50.338328  419741 cri.go:89] found id: "4691c84ef3d07ba57728ac09a4552d6f8bf0fcc54705555278513250908efe00"
	I1101 09:43:50.338332  419741 cri.go:89] found id: "fe67b21efed6f525df5544423bd5e06bbcd9f87fc9fcb3d89a75da908e8b778a"
	I1101 09:43:50.338336  419741 cri.go:89] found id: "67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91"
	I1101 09:43:50.338341  419741 cri.go:89] found id: "21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a"
	I1101 09:43:50.338344  419741 cri.go:89] found id: "227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4"
	I1101 09:43:50.338348  419741 cri.go:89] found id: "2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588"
	I1101 09:43:50.338356  419741 cri.go:89] found id: "c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	I1101 09:43:50.338360  419741 cri.go:89] found id: "a64f928570c8e93d7275efca3d34ba9452ed83d5461da05e9ccb47d00976bc06"
	I1101 09:43:50.338419  419741 cri.go:89] found id: ""
	I1101 09:43:50.338509  419741 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:43:50.353009  419741 out.go:203] 
	W1101 09:43:50.354151  419741 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:43:50.354174  419741 out.go:285] * 
	* 
	W1101 09:43:50.358408  419741 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:43:50.359838  419741 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-106430 --alsologtostderr -v=1 failed: exit status 80
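The failure mode is consistent across all four probes in the stderr above: `sudo runc list -f json` exits 1 because /run/runc is absent on the node, so minikube retries three times (140ms, 280ms, 655ms) and then surfaces GUEST_PAUSE. A rough Go sketch of that probe-with-backoff pattern follows; the helper and the fixed backoff schedule are illustrative assumptions, not minikube's ssh_runner/retry implementation:

// Hedged sketch of the probe-and-retry pattern visible in the pause log
// above. Running the command locally via os/exec stands in for minikube's
// SSH runner; this is an illustration, not minikube's code.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunningContainers mirrors the failing probe: `sudo runc list -f json`.
func listRunningContainers() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	// Waits roughly matching the retry.go lines above (140ms, 280ms, 655ms).
	backoff := []time.Duration{140 * time.Millisecond, 280 * time.Millisecond, 655 * time.Millisecond}
	for attempt := 0; ; attempt++ {
		out, err := listRunningContainers()
		if err == nil {
			fmt.Printf("runc containers: %s\n", out)
			return
		}
		if attempt >= len(backoff) {
			// After the last retry minikube reports GUEST_PAUSE, as in the log.
			fmt.Printf("giving up: %v\n", err)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff[attempt], err)
		time.Sleep(backoff[attempt])
	}
}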
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-106430
helpers_test.go:243: (dbg) docker inspect old-k8s-version-106430:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca",
	        "Created": "2025-11-01T09:41:36.12631196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 406560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:42:51.008788031Z",
	            "FinishedAt": "2025-11-01T09:42:49.79183264Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/hosts",
	        "LogPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca-json.log",
	        "Name": "/old-k8s-version-106430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-106430:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-106430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca",
	                "LowerDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-106430",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-106430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-106430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-106430",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-106430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5673fa198416ef088eb15414769b91962feb4c65414be69534607b820d44532f",
	            "SandboxKey": "/var/run/docker/netns/5673fa198416",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-106430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:d1:1b:d4:0a:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eae036c06ea144341078058874d7c650e992adb447b26734be766752bb055131",
	                    "EndpointID": "b3ff62d077193eb3ab393ec519850009622eccacd6370cbb91a12a568aee818f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-106430",
	                        "7fdf9f94daa8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
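For reference, the Ports block in the inspect output above is exactly what the earlier cli_runner call reads: minikube resolves its SSH endpoint by asking Docker for the host port bound to 22/tcp (33108 here) with a Go template. A self-contained sketch of the same lookup; shelling out via os/exec, and dropping the literal quotes minikube wraps around the template, are assumptions for illustration, not minikube's cli_runner code:

// Sketch of the 22/tcp host-port lookup performed by the
// `docker container inspect -f` invocation in the stderr log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same template as the cli_runner invocation in the log.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-106430")
	if err != nil {
		log.Fatal(err)
	}
	// For this report, the inspect output above maps 22/tcp to 33108.
	fmt.Printf("ssh endpoint: 127.0.0.1:%s\n", port)
}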
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430: exit status 2 (388.041352ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-106430 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-106430 logs -n 25: (1.622789206s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo crio config                                                                                                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p custom-flannel-307390                                                                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
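	The final audit row is the pause invocation whose failure this post-mortem covers; reproducing it outside the harness is the same command against the same profile (binary path as used throughout this report):
	
	    out/minikube-linux-amd64 pause -p old-k8s-version-106430 --alsologtostderr -v=1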
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:43:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:43:35.627182  415823 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:43:35.627567  415823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:43:35.627581  415823 out.go:374] Setting ErrFile to fd 2...
	I1101 09:43:35.627588  415823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:43:35.627908  415823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:43:35.628542  415823 out.go:368] Setting JSON to false
	I1101 09:43:35.630224  415823 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5154,"bootTime":1761985062,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:43:35.630360  415823 start.go:143] virtualization: kvm guest
	I1101 09:43:35.632340  415823 out.go:179] * [default-k8s-diff-port-927869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:43:35.633653  415823 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:43:35.633693  415823 notify.go:221] Checking for updates...
	I1101 09:43:35.635968  415823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:43:35.637213  415823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:35.638670  415823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:43:35.640555  415823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:43:35.641935  415823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:43:35.643593  415823 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:35.644289  415823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:43:35.677159  415823 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:43:35.677294  415823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:43:35.745107  415823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 09:43:35.731214706 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:43:35.745203  415823 docker.go:319] overlay module found
	I1101 09:43:35.747304  415823 out.go:179] * Using the docker driver based on existing profile
	I1101 09:43:35.748842  415823 start.go:309] selected driver: docker
	I1101 09:43:35.748864  415823 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:35.749041  415823 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:43:35.749526  415823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:43:35.817781  415823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 09:43:35.804958596 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:43:35.818730  415823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:35.818807  415823 cni.go:84] Creating CNI manager for ""
	I1101 09:43:35.818878  415823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:43:35.818969  415823 start.go:353] cluster config:
	{Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:35.822718  415823 out.go:179] * Starting "default-k8s-diff-port-927869" primary control-plane node in "default-k8s-diff-port-927869" cluster
	I1101 09:43:35.824019  415823 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:43:35.825673  415823 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:43:35.826873  415823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:43:35.826950  415823 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:43:35.826974  415823 cache.go:59] Caching tarball of preloaded images
	I1101 09:43:35.826999  415823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:43:35.827081  415823 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:43:35.827098  415823 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:43:35.827210  415823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json ...
	I1101 09:43:35.853395  415823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:43:35.853418  415823 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:43:35.853434  415823 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:43:35.853466  415823 start.go:360] acquireMachinesLock for default-k8s-diff-port-927869: {Name:mk1d147ba61fa7b0d79d77d5ddb1fccc76bfa8fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:43:35.853527  415823 start.go:364] duration metric: took 41.392µs to acquireMachinesLock for "default-k8s-diff-port-927869"
	I1101 09:43:35.853544  415823 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:43:35.853551  415823 fix.go:54] fixHost starting: 
	I1101 09:43:35.853792  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:35.877354  415823 fix.go:112] recreateIfNeeded on default-k8s-diff-port-927869: state=Stopped err=<nil>
	W1101 09:43:35.877393  415823 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:43:35.896303  406120 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-106430" is "Ready"
	I1101 09:43:35.896339  406120 pod_ready.go:86] duration metric: took 399.027625ms for pod "kube-scheduler-old-k8s-version-106430" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:35.896358  406120 pod_ready.go:40] duration metric: took 34.410206491s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:35.960159  406120 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 09:43:35.962059  406120 out.go:203] 
	W1101 09:43:35.963703  406120 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 09:43:35.965007  406120 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:43:35.966481  406120 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-106430" cluster and "default" namespace by default
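	The skew warning above (client kubectl 1.34.1 against a 1.28.0 cluster, minor skew 6) is minikube's client/server version check, and the hint it prints avoids the skew by using a version-matched kubectl that minikube caches per Kubernetes version. A sketch, with the standard profile flag added to target this cluster:
	
	    # run the kubectl that matches the cluster's Kubernetes version
	    out/minikube-linux-amd64 kubectl -p old-k8s-version-106430 -- get pods -A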
	W1101 09:43:34.566565  412381 pod_ready.go:104] pod "coredns-66bc5c9577-8qn69" is not "Ready", error: node "no-preload-224845" hosting pod "coredns-66bc5c9577-8qn69" is not "Ready" (will retry)
	W1101 09:43:36.567440  412381 pod_ready.go:104] pod "coredns-66bc5c9577-8qn69" is not "Ready", error: node "no-preload-224845" hosting pod "coredns-66bc5c9577-8qn69" is not "Ready" (will retry)
	W1101 09:43:39.066180  412381 pod_ready.go:104] pod "coredns-66bc5c9577-8qn69" is not "Ready", error: node "no-preload-224845" hosting pod "coredns-66bc5c9577-8qn69" is not "Ready" (will retry)
	I1101 09:43:34.992539  415212 out.go:252] * Restarting existing docker container for "embed-certs-214580" ...
	I1101 09:43:34.992640  415212 cli_runner.go:164] Run: docker start embed-certs-214580
	I1101 09:43:35.333517  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:35.358071  415212 kic.go:430] container "embed-certs-214580" state is running.
	I1101 09:43:35.358543  415212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-214580
	I1101 09:43:35.380936  415212 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/config.json ...
	I1101 09:43:35.381273  415212 machine.go:94] provisionDockerMachine start ...
	I1101 09:43:35.381367  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:35.406922  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:35.407325  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:35.407339  415212 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:43:35.408186  415212 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48166->127.0.0.1:33118: read: connection reset by peer
	I1101 09:43:38.551979  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-214580
	
	I1101 09:43:38.552003  415212 ubuntu.go:182] provisioning hostname "embed-certs-214580"
	I1101 09:43:38.552057  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:38.571729  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:38.572053  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:38.572073  415212 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-214580 && echo "embed-certs-214580" | sudo tee /etc/hostname
	I1101 09:43:38.724394  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-214580
	
	I1101 09:43:38.724486  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:38.743301  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:38.743523  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:38.743612  415212 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-214580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-214580/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-214580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:43:38.887278  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:43:38.887308  415212 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:43:38.887328  415212 ubuntu.go:190] setting up certificates
	I1101 09:43:38.887341  415212 provision.go:84] configureAuth start
	I1101 09:43:38.887393  415212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-214580
	I1101 09:43:38.906804  415212 provision.go:143] copyHostCerts
	I1101 09:43:38.906876  415212 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:43:38.906902  415212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:43:38.907006  415212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:43:38.907108  415212 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:43:38.907118  415212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:43:38.907146  415212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:43:38.907200  415212 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:43:38.907207  415212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:43:38.907228  415212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:43:38.907277  415212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-214580 san=[127.0.0.1 192.168.94.2 embed-certs-214580 localhost minikube]
	I1101 09:43:39.083797  415212 provision.go:177] copyRemoteCerts
	I1101 09:43:39.083863  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:43:39.083904  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.103049  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.204772  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:43:39.222860  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:43:39.241472  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:43:39.259144  415212 provision.go:87] duration metric: took 371.788271ms to configureAuth
	I1101 09:43:39.259175  415212 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:43:39.259364  415212 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:39.259515  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.279273  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:39.279490  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:39.279504  415212 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:43:39.604874  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:43:39.604902  415212 machine.go:97] duration metric: took 4.223609305s to provisionDockerMachine
	I1101 09:43:39.604947  415212 start.go:293] postStartSetup for "embed-certs-214580" (driver="docker")
	I1101 09:43:39.604961  415212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:43:39.605027  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:43:39.605107  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.628231  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.731278  415212 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:43:39.735345  415212 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:43:39.735373  415212 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:43:39.735388  415212 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:43:39.735445  415212 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:43:39.735540  415212 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:43:39.735639  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:43:39.744974  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:35.878961  415823 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-927869" ...
	I1101 09:43:35.879034  415823 cli_runner.go:164] Run: docker start default-k8s-diff-port-927869
	I1101 09:43:36.212664  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:36.235772  415823 kic.go:430] container "default-k8s-diff-port-927869" state is running.
	I1101 09:43:36.236319  415823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927869
	I1101 09:43:36.260514  415823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json ...
	I1101 09:43:36.260821  415823 machine.go:94] provisionDockerMachine start ...
	I1101 09:43:36.260946  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:36.284553  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:36.284868  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:36.284895  415823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:43:36.285631  415823 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42336->127.0.0.1:33123: read: connection reset by peer
	I1101 09:43:39.434368  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-927869
	
	I1101 09:43:39.434405  415823 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-927869"
	I1101 09:43:39.434476  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:39.457587  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:39.457862  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:39.457879  415823 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-927869 && echo "default-k8s-diff-port-927869" | sudo tee /etc/hostname
	I1101 09:43:39.616291  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-927869
	
	I1101 09:43:39.616378  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:39.637043  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:39.637269  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:39.637299  415823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-927869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-927869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-927869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:43:39.781139  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:43:39.781174  415823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:43:39.781200  415823 ubuntu.go:190] setting up certificates
	I1101 09:43:39.781212  415823 provision.go:84] configureAuth start
	I1101 09:43:39.781269  415823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927869
	I1101 09:43:39.803384  415823 provision.go:143] copyHostCerts
	I1101 09:43:39.803440  415823 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:43:39.803457  415823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:43:39.803528  415823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:43:39.803656  415823 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:43:39.803668  415823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:43:39.803699  415823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:43:39.803758  415823 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:43:39.803765  415823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:43:39.803787  415823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:43:39.803838  415823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-927869 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-927869 localhost minikube]
	I1101 09:43:39.824527  415823 provision.go:177] copyRemoteCerts
	I1101 09:43:39.824583  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:43:39.824621  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:39.845103  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:39.951417  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:43:39.972950  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 09:43:39.992991  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:43:40.013279  415823 provision.go:87] duration metric: took 232.051772ms to configureAuth
	I1101 09:43:40.013313  415823 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:43:40.013496  415823 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:40.013631  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.032984  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:40.033233  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:40.033254  415823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:43:40.480951  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:43:40.480977  415823 machine.go:97] duration metric: took 4.220138355s to provisionDockerMachine
	I1101 09:43:40.480999  415823 start.go:293] postStartSetup for "default-k8s-diff-port-927869" (driver="docker")
	I1101 09:43:40.481011  415823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:43:40.481084  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:43:40.481147  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.509215  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:39.566268  412381 pod_ready.go:94] pod "coredns-66bc5c9577-8qn69" is "Ready"
	I1101 09:43:39.566303  412381 pod_ready.go:86] duration metric: took 9.00594564s for pod "coredns-66bc5c9577-8qn69" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.569003  412381 pod_ready.go:83] waiting for pod "etcd-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.573960  412381 pod_ready.go:94] pod "etcd-no-preload-224845" is "Ready"
	I1101 09:43:39.573994  412381 pod_ready.go:86] duration metric: took 4.963685ms for pod "etcd-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.576239  412381 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.582677  412381 pod_ready.go:94] pod "kube-apiserver-no-preload-224845" is "Ready"
	I1101 09:43:39.582712  412381 pod_ready.go:86] duration metric: took 6.44298ms for pod "kube-apiserver-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.585728  412381 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.165412  412381 pod_ready.go:94] pod "kube-controller-manager-no-preload-224845" is "Ready"
	I1101 09:43:40.165446  412381 pod_ready.go:86] duration metric: took 579.690538ms for pod "kube-controller-manager-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.365706  412381 pod_ready.go:83] waiting for pod "kube-proxy-f2f64" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.765375  412381 pod_ready.go:94] pod "kube-proxy-f2f64" is "Ready"
	I1101 09:43:40.765408  412381 pod_ready.go:86] duration metric: took 399.669543ms for pod "kube-proxy-f2f64" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.964829  412381 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:41.364446  412381 pod_ready.go:94] pod "kube-scheduler-no-preload-224845" is "Ready"
	I1101 09:43:41.364473  412381 pod_ready.go:86] duration metric: took 399.612916ms for pod "kube-scheduler-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:41.364485  412381 pod_ready.go:40] duration metric: took 10.80753384s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:41.419363  412381 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:43:41.421638  412381 out.go:179] * Done! kubectl is now configured to use "no-preload-224845" cluster and "default" namespace by default
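	The pod_ready lines above are minikube's internal readiness poll over the labeled kube-system pods; a rough manual equivalent with kubectl wait, using the same label selector and the kubectl context minikube just configured (a sketch, not harness code):
	
	    # wait for CoreDNS in the restarted cluster to report Ready
	    kubectl --context no-preload-224845 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=120s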
	I1101 09:43:39.768818  415212 start.go:296] duration metric: took 163.845698ms for postStartSetup
	I1101 09:43:39.768933  415212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:43:39.768992  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.791082  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.894971  415212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:43:39.901509  415212 fix.go:56] duration metric: took 4.931458871s for fixHost
	I1101 09:43:39.901542  415212 start.go:83] releasing machines lock for "embed-certs-214580", held for 4.931512794s
	I1101 09:43:39.901616  415212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-214580
	I1101 09:43:39.921567  415212 ssh_runner.go:195] Run: cat /version.json
	I1101 09:43:39.921615  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.921674  415212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:43:39.921747  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.941038  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.941475  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:40.041932  415212 ssh_runner.go:195] Run: systemctl --version
	I1101 09:43:40.132078  415212 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:43:40.184645  415212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:43:40.191562  415212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:43:40.191638  415212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:43:40.203528  415212 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:43:40.203560  415212 start.go:496] detecting cgroup driver to use...
	I1101 09:43:40.203605  415212 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:43:40.203657  415212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:43:40.223845  415212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:43:40.242335  415212 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:43:40.242401  415212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:43:40.264507  415212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:43:40.283831  415212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:43:40.410011  415212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:43:40.544411  415212 docker.go:234] disabling docker service ...
	I1101 09:43:40.544482  415212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:43:40.565293  415212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:43:40.584761  415212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:43:40.708564  415212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:43:40.824895  415212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:43:40.843985  415212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:43:40.867494  415212 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:43:40.867577  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.881802  415212 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:43:40.881867  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.895576  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.909540  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.921684  415212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:43:40.934221  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.945827  415212 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.958115  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.971161  415212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:43:40.982640  415212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:43:40.993967  415212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:41.112164  415212 ssh_runner.go:195] Run: sudo systemctl restart crio
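	Taken together, the crictl.yaml write and the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf declaring the pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A reconstruction of the touched keys from those commands (section headers are an assumption; the seds edit each key in place wherever it already sits):
	
	    # /etc/crio/crio.conf.d/02-crio.conf -- keys touched by the provisioning steps above
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]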
	I1101 09:43:41.439378  415212 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:43:41.439449  415212 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:43:41.445073  415212 start.go:564] Will wait 60s for crictl version
	I1101 09:43:41.445136  415212 ssh_runner.go:195] Run: which crictl
	I1101 09:43:41.450008  415212 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:43:41.484874  415212 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:43:41.485049  415212 ssh_runner.go:195] Run: crio --version
	I1101 09:43:41.531742  415212 ssh_runner.go:195] Run: crio --version
	I1101 09:43:41.578974  415212 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:43:40.634212  415823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:43:40.639840  415823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:43:40.639874  415823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:43:40.639887  415823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:43:40.639958  415823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:43:40.640050  415823 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:43:40.640168  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:43:40.651132  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:40.678266  415823 start.go:296] duration metric: took 197.249045ms for postStartSetup
	I1101 09:43:40.678359  415823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:43:40.678411  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.704400  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:40.815773  415823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
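
The two df probes are minikube's disk-capacity check on /var: awk picks column 5 of the second output row for percent used and column 4 for gigabytes free. Run by hand they look like this (a sketch; the values are illustrative):

    $ df -h /var | awk 'NR==2{print $5}'    # Use% column, e.g. 23%
    $ df -BG /var | awk 'NR==2{print $4}'   # Avail column in GB, e.g. 180G
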
	I1101 09:43:40.822613  415823 fix.go:56] duration metric: took 4.969051921s for fixHost
	I1101 09:43:40.822656  415823 start.go:83] releasing machines lock for "default-k8s-diff-port-927869", held for 4.969117696s
	I1101 09:43:40.822729  415823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927869
	I1101 09:43:40.845779  415823 ssh_runner.go:195] Run: cat /version.json
	I1101 09:43:40.845815  415823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:43:40.845853  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.845876  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.871388  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:40.872706  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:40.978902  415823 ssh_runner.go:195] Run: systemctl --version
	I1101 09:43:41.065431  415823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:43:41.115595  415823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:43:41.122147  415823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:43:41.122230  415823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:43:41.134352  415823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
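
The find/-exec mv above parks any host-level bridge or podman CNI configs by appending a .mk_disabled suffix so they cannot shadow the CNI minikube installs; on this node nothing matched. Re-enabling a parked config is just the reverse rename (a sketch with a hypothetical file name; list /etc/cni/net.d to see what was actually moved):

    sudo mv /etc/cni/net.d/100-crio-bridge.conf.mk_disabled \
            /etc/cni/net.d/100-crio-bridge.conf
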
	I1101 09:43:41.134385  415823 start.go:496] detecting cgroup driver to use...
	I1101 09:43:41.134423  415823 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:43:41.134492  415823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:43:41.158616  415823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:43:41.176786  415823 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:43:41.176851  415823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:43:41.199768  415823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:43:41.215252  415823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:43:41.304375  415823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:43:41.413545  415823 docker.go:234] disabling docker service ...
	I1101 09:43:41.413622  415823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:43:41.435152  415823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:43:41.452973  415823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:43:41.576705  415823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:43:41.715584  415823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:43:41.733364  415823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:43:41.759420  415823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:43:41.759492  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.772085  415823 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:43:41.772159  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.785349  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.799151  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.816053  415823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:43:41.829986  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.844748  415823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.858345  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.873325  415823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:43:41.883887  415823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:43:41.895017  415823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:42.019796  415823 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:43:42.342412  415823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:43:42.342479  415823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:43:42.347776  415823 start.go:564] Will wait 60s for crictl version
	I1101 09:43:42.347846  415823 ssh_runner.go:195] Run: which crictl
	I1101 09:43:42.352207  415823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:43:42.382202  415823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:43:42.382293  415823 ssh_runner.go:195] Run: crio --version
	I1101 09:43:42.417096  415823 ssh_runner.go:195] Run: crio --version
	I1101 09:43:42.454638  415823 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:43:41.581667  415212 cli_runner.go:164] Run: docker network inspect embed-certs-214580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:43:41.610857  415212 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:43:41.620391  415212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
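
The /etc/hosts rewrite uses the { grep -v; echo; } > tmp; cp idiom rather than sed -i because in a container /etc/hosts is typically a bind mount: it can be overwritten in place, but the rename that sed -i performs would fail. The same pattern pins any entry (a sketch, reusing the values from the line above):

    # drop any stale host.minikube.internal line, append the current one,
    # then copy over the bind-mounted file instead of renaming onto it
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.94.1\thost.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
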
	I1101 09:43:41.643373  415212 kubeadm.go:884] updating cluster {Name:embed-certs-214580 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-214580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:43:41.643534  415212 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:43:41.643602  415212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:41.687758  415212 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:41.687786  415212 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:43:41.687842  415212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:41.722651  415212 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:41.722677  415212 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:43:41.722687  415212 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:43:41.722813  415212 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-214580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-214580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:43:41.722896  415212 ssh_runner.go:195] Run: crio config
	I1101 09:43:41.811617  415212 cni.go:84] Creating CNI manager for ""
	I1101 09:43:41.811658  415212 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:43:41.811680  415212 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:43:41.811720  415212 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-214580 NodeName:embed-certs-214580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:43:41.811981  415212 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-214580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
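
Because the whole multi-document manifest above is written out to /var/tmp/minikube/kubeadm.yaml.new before it is used (see the scp below), it can be checked against the matching kubeadm binary (a sketch; kubeadm config validate is assumed here to accept this multi-document file):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
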
	
	I1101 09:43:41.812063  415212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:43:41.824766  415212 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:43:41.824890  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:43:41.838302  415212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:43:41.857720  415212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:43:41.879145  415212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 09:43:41.900090  415212 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:43:41.906256  415212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:43:41.920630  415212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:42.046983  415212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:42.080483  415212 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580 for IP: 192.168.94.2
	I1101 09:43:42.080510  415212 certs.go:195] generating shared ca certs ...
	I1101 09:43:42.080531  415212 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:42.080742  415212 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:43:42.080808  415212 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:43:42.080825  415212 certs.go:257] generating profile certs ...
	I1101 09:43:42.080990  415212 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/client.key
	I1101 09:43:42.081060  415212 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/apiserver.key.db1fd92b
	I1101 09:43:42.081117  415212 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/proxy-client.key
	I1101 09:43:42.081245  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:43:42.081280  415212 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:43:42.081288  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:43:42.081317  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:43:42.081347  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:43:42.081372  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:43:42.081418  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:42.082388  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:43:42.109112  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:43:42.135779  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:43:42.176881  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:43:42.206272  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:43:42.231191  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:43:42.252071  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:43:42.274789  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:43:42.295541  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:43:42.318343  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:43:42.340360  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:43:42.362898  415212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:43:42.379029  415212 ssh_runner.go:195] Run: openssl version
	I1101 09:43:42.386635  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:43:42.397478  415212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:43:42.402466  415212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:43:42.402534  415212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:43:42.446825  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:43:42.456408  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:43:42.467494  415212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:42.472242  415212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:42.472302  415212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:42.518699  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:43:42.529010  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:43:42.539705  415212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:43:42.545344  415212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:43:42.545410  415212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:43:42.595979  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
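
Each openssl x509 -hash call computes the OpenSSL subject hash, and the ln -fs that follows creates the <hash>.0 symlink that the system certificate lookup expects; b5213941, 3ec20f2e and 51391683 in the commands above are exactly those hashes. Reproducing one by hand (a sketch):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0
    # ... b5213941.0 -> /etc/ssl/certs/minikubeCA.pem
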
	I1101 09:43:42.606462  415212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:43:42.611805  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:43:42.663825  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:43:42.709714  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:43:42.770308  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:43:42.819227  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:43:42.857860  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
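
The -checkend 86400 runs ask whether each certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and 1 if the certificate would expire inside the window, which is what decides whether minikube regenerates it. Standalone (a sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires soon; regenerate"
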
	I1101 09:43:42.911665  415212 kubeadm.go:401] StartCluster: {Name:embed-certs-214580 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-214580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:42.911779  415212 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:43:42.911854  415212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:43:42.952363  415212 cri.go:89] found id: "92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1"
	I1101 09:43:42.952395  415212 cri.go:89] found id: "900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8"
	I1101 09:43:42.952401  415212 cri.go:89] found id: "e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc"
	I1101 09:43:42.952408  415212 cri.go:89] found id: "44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91"
	I1101 09:43:42.952412  415212 cri.go:89] found id: ""
	I1101 09:43:42.952464  415212 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:43:42.969631  415212 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:42Z" level=error msg="open /run/runc: no such file or directory"
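
The runc list failure is expected on a freshly restarted node: without an explicit --root, runc looks for container state under /run/runc, which does not exist until something has run there, so minikube concludes nothing is paused and continues. Making the state root explicit shows the same thing (a sketch; cri-o may be configured with a different root):

    sudo runc --root /run/runc list
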
	I1101 09:43:42.969717  415212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:43:42.983548  415212 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:43:42.983658  415212 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:43:42.983758  415212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:43:42.997583  415212 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:43:42.998501  415212 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-214580" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:42.999052  415212 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-214580" cluster setting kubeconfig missing "embed-certs-214580" context setting]
	I1101 09:43:42.999985  415212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
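
After the repair the kubeconfig contains the cluster and context entries again, which can be confirmed directly against the file the log names (a sketch):

    kubectl config get-contexts \
      --kubeconfig /home/jenkins/minikube-integration/21833-104443/kubeconfig
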
	I1101 09:43:43.002025  415212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:43:43.013495  415212 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 09:43:43.013531  415212 kubeadm.go:602] duration metric: took 29.849652ms to restartPrimaryControlPlane
	I1101 09:43:43.013542  415212 kubeadm.go:403] duration metric: took 101.889269ms to StartCluster
	I1101 09:43:43.013561  415212 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:43.013619  415212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:43.015698  415212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:43.016029  415212 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:43:43.016272  415212 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:43.016328  415212 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:43:43.016416  415212 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-214580"
	I1101 09:43:43.016433  415212 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-214580"
	W1101 09:43:43.016441  415212 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:43:43.016473  415212 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:43:43.016984  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.017151  415212 addons.go:70] Setting default-storageclass=true in profile "embed-certs-214580"
	I1101 09:43:43.017179  415212 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-214580"
	I1101 09:43:43.017517  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.017685  415212 addons.go:70] Setting dashboard=true in profile "embed-certs-214580"
	I1101 09:43:43.017704  415212 addons.go:239] Setting addon dashboard=true in "embed-certs-214580"
	W1101 09:43:43.017712  415212 addons.go:248] addon dashboard should already be in state true
	I1101 09:43:43.017744  415212 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:43:43.018423  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.018695  415212 out.go:179] * Verifying Kubernetes components...
	I1101 09:43:43.020059  415212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:43.046025  415212 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:43:43.048430  415212 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:43:43.048634  415212 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:43.048682  415212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:43:43.048868  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:43.051226  415212 addons.go:239] Setting addon default-storageclass=true in "embed-certs-214580"
	W1101 09:43:43.051248  415212 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:43:43.051278  415212 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:43:43.051792  415212 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:43:42.456066  415823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-927869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:43:42.479834  415823 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:43:42.484221  415823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:43:42.496301  415823 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:43:42.496413  415823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:43:42.496464  415823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:42.535292  415823 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:42.535315  415823 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:43:42.535368  415823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:42.568658  415823 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:42.568680  415823 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:43:42.568688  415823 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 09:43:42.568801  415823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-927869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:43:42.568865  415823 ssh_runner.go:195] Run: crio config
	I1101 09:43:42.626225  415823 cni.go:84] Creating CNI manager for ""
	I1101 09:43:42.626249  415823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:43:42.626272  415823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:43:42.626304  415823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-927869 NodeName:default-k8s-diff-port-927869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:43:42.626482  415823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-927869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:43:42.626559  415823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:43:42.639067  415823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:43:42.639152  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:43:42.649488  415823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 09:43:42.665236  415823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:43:42.682988  415823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1101 09:43:42.700262  415823 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:43:42.704587  415823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:43:42.717216  415823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:42.844156  415823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:42.868823  415823 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869 for IP: 192.168.76.2
	I1101 09:43:42.868848  415823 certs.go:195] generating shared ca certs ...
	I1101 09:43:42.868877  415823 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:42.869058  415823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:43:42.869108  415823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:43:42.869125  415823 certs.go:257] generating profile certs ...
	I1101 09:43:42.869245  415823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/client.key
	I1101 09:43:42.869319  415823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/apiserver.key.e8df713d
	I1101 09:43:42.869371  415823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/proxy-client.key
	I1101 09:43:42.869516  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:43:42.869555  415823 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:43:42.869569  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:43:42.869598  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:43:42.869623  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:43:42.869654  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:43:42.869702  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:42.870509  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:43:42.894454  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:43:42.920563  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:43:42.948379  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:43:42.977320  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:43:43.010941  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:43:43.047269  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:43:43.084220  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:43:43.118674  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:43:43.142840  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:43:43.166965  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:43:43.187907  415823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:43:43.204889  415823 ssh_runner.go:195] Run: openssl version
	I1101 09:43:43.213417  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:43:43.227714  415823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:43.232856  415823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:43.232964  415823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:43.287727  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:43:43.300089  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:43:43.312268  415823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:43:43.317714  415823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:43:43.317792  415823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:43:43.359788  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:43:43.371296  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:43:43.382180  415823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:43:43.388219  415823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:43:43.388286  415823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:43:43.444324  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:43:43.457075  415823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:43:43.463533  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:43:43.526185  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:43:43.597024  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:43:43.671106  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:43:43.766172  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:43:43.837327  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:43:43.900489  415823 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:43.900851  415823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:43:43.900981  415823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:43:43.958788  415823 cri.go:89] found id: "b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be"
	I1101 09:43:43.958888  415823 cri.go:89] found id: "ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7"
	I1101 09:43:43.958894  415823 cri.go:89] found id: "a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07"
	I1101 09:43:43.958900  415823 cri.go:89] found id: "ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62"
	I1101 09:43:43.958904  415823 cri.go:89] found id: ""
	I1101 09:43:43.959040  415823 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:43:43.982105  415823 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:43Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:43:43.982185  415823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:43:43.996172  415823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:43:43.996201  415823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:43:43.996254  415823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:43:44.010324  415823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:43:44.011625  415823 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-927869" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:44.012961  415823 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-927869" cluster setting kubeconfig missing "default-k8s-diff-port-927869" context setting]
	I1101 09:43:44.014941  415823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
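The repair above rewrites the kubeconfig under a file lock, re-adding the missing cluster and context entries for the profile. A quick hand-check that the repaired entry is in place (profile name taken from the log):

	kubectl config get-contexts default-k8s-diff-port-927869
	kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-927869")].cluster.server}'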
	I1101 09:43:44.017776  415823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:43:44.040452  415823 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:43:44.040564  415823 kubeadm.go:602] duration metric: took 44.35552ms to restartPrimaryControlPlane
	I1101 09:43:44.040586  415823 kubeadm.go:403] duration metric: took 140.107691ms to StartCluster
	I1101 09:43:44.040646  415823 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:44.040744  415823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:44.045279  415823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:44.045752  415823 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:43:44.046008  415823 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:44.046071  415823 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:43:44.046158  415823 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-927869"
	I1101 09:43:44.046177  415823 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-927869"
	W1101 09:43:44.046185  415823 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:43:44.046211  415823 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:43:44.046687  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.046756  415823 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-927869"
	I1101 09:43:44.046773  415823 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-927869"
	W1101 09:43:44.046780  415823 addons.go:248] addon dashboard should already be in state true
	I1101 09:43:44.046808  415823 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:43:44.047268  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.047526  415823 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-927869"
	I1101 09:43:44.047557  415823 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-927869"
	I1101 09:43:44.047702  415823 out.go:179] * Verifying Kubernetes components...
	I1101 09:43:44.047889  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.049185  415823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:44.087039  415823 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-927869"
	W1101 09:43:44.087065  415823 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:43:44.087095  415823 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:43:44.087598  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.100010  415823 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:43:44.100096  415823 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:43:44.101322  415823 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:44.101343  415823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:43:44.101407  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:44.101605  415823 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:43:43.052097  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.052856  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:43:43.052927  415212 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:43:43.053018  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:43.084265  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:43.086134  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:43.094070  415212 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:43.094147  415212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:43:43.094238  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:43.124062  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:43.200033  415212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:43.215782  415212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:43.221390  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:43:43.221419  415212 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:43:43.224610  415212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:43:43.240575  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:43:43.240604  415212 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:43:43.245391  415212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:43.258163  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:43:43.258194  415212 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:43:43.279416  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:43:43.279443  415212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:43:43.304282  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:43:43.304308  415212 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:43:43.324970  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:43:43.325001  415212 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:43:43.342938  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:43:43.342967  415212 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:43:43.361686  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:43:43.361718  415212 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:43:43.379151  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:43:43.379183  415212 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:43:43.396317  415212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
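All ten dashboard manifests are applied in one kubectl invocation against the cluster's own kubeconfig. Once the apply returns, the result can be checked with something like the following (the pod label is assumed from the stock kubernetesui/dashboard manifests):

	kubectl -n kubernetes-dashboard get deploy,svc
	kubectl -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard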
	I1101 09:43:44.102961  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:43:44.102987  415823 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:43:44.103054  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:44.125100  415823 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:44.125128  415823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:43:44.125193  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:44.138658  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:44.149228  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:44.165428  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:44.321348  415823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:44.342100  415823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:44.343962  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:43:44.344046  415823 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:43:44.359269  415823 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:43:44.365623  415823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:44.374775  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:43:44.374804  415823 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:43:44.401472  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:43:44.401500  415823 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:43:44.436676  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:43:44.436702  415823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:43:44.463074  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:43:44.463127  415823 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:43:44.492310  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:43:44.492353  415823 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:43:44.514280  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:43:44.514307  415823 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:43:44.534725  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:43:44.534777  415823 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:43:44.553894  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:43:44.553942  415823 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:43:44.573149  415823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:43:45.265521  415212 node_ready.go:49] node "embed-certs-214580" is "Ready"
	I1101 09:43:45.265564  415212 node_ready.go:38] duration metric: took 2.04086826s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:43:45.265581  415212 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:45.265684  415212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:46.054367  415212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.838544632s)
	I1101 09:43:46.054434  415212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.809016396s)
	I1101 09:43:46.054793  415212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.658434436s)
	I1101 09:43:46.054849  415212 api_server.go:72] duration metric: took 3.038641218s to wait for apiserver process to appear ...
	I1101 09:43:46.054889  415212 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:46.055001  415212 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:43:46.056593  415212 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-214580 addons enable metrics-server
	
	I1101 09:43:46.065878  415212 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:43:46.065908  415212 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
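A 500 from /healthz with [-]poststarthook/rbac/bootstrap-roles and [-]poststarthook/scheduling/bootstrap-system-priority-classes failing is expected right after an apiserver restart; those hooks flip to ok once the default RBAC roles and priority classes have been reconciled, and minikube simply retries until it sees a 200. The same verbose probe can be issued by hand:

	# through an authenticated client
	kubectl get --raw '/healthz?verbose'
	# or directly (host/port from the log); works unauthenticated only if anonymous auth is enabled
	curl -sk 'https://192.168.94.2:8443/healthz?verbose'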
	I1101 09:43:46.077849  415212 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:43:46.155800  415823 node_ready.go:49] node "default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:46.155836  415823 node_ready.go:38] duration metric: took 1.796509732s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:43:46.155857  415823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:46.155954  415823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:46.930168  415823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.588021536s)
	I1101 09:43:46.930283  415823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.564629507s)
	I1101 09:43:46.930560  415823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.357219376s)
	I1101 09:43:46.930894  415823 api_server.go:72] duration metric: took 2.885100182s to wait for apiserver process to appear ...
	I1101 09:43:46.930928  415823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:46.930950  415823 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 09:43:46.932868  415823 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-927869 addons enable metrics-server
	
	I1101 09:43:46.937131  415823 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:43:46.937158  415823 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:43:46.944467  415823 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:43:46.079151  415212 addons.go:515] duration metric: took 3.062819878s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:43:46.554999  415212 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:43:46.567759  415212 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:43:46.569176  415212 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:46.569209  415212 api_server.go:131] duration metric: took 514.306569ms to wait for apiserver health ...
	I1101 09:43:46.569221  415212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:46.576234  415212 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:46.576290  415212 system_pods.go:61] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:46.576305  415212 system_pods.go:61] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:46.576315  415212 system_pods.go:61] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:46.576325  415212 system_pods.go:61] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:46.576333  415212 system_pods.go:61] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:46.576341  415212 system_pods.go:61] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:46.576363  415212 system_pods.go:61] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:46.576372  415212 system_pods.go:61] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:46.576382  415212 system_pods.go:74] duration metric: took 7.152695ms to wait for pod list to return data ...
	I1101 09:43:46.576392  415212 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:46.580626  415212 default_sa.go:45] found service account: "default"
	I1101 09:43:46.580655  415212 default_sa.go:55] duration metric: took 4.255003ms for default service account to be created ...
	I1101 09:43:46.580667  415212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:46.585141  415212 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:46.585181  415212 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:46.585193  415212 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:46.585203  415212 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:46.585217  415212 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:46.585227  415212 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:46.585240  415212 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:46.585246  415212 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:46.585255  415212 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:46.585264  415212 system_pods.go:126] duration metric: took 4.591983ms to wait for k8s-apps to be running ...
	I1101 09:43:46.585273  415212 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:46.585316  415212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:46.613222  415212 system_svc.go:56] duration metric: took 27.933599ms WaitForService to wait for kubelet
	I1101 09:43:46.613257  415212 kubeadm.go:587] duration metric: took 3.597049999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:46.613327  415212 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:46.621371  415212 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:46.621417  415212 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:46.621435  415212 node_conditions.go:105] duration metric: took 8.094114ms to run NodePressure ...
	I1101 09:43:46.621451  415212 start.go:242] waiting for startup goroutines ...
	I1101 09:43:46.621460  415212 start.go:247] waiting for cluster config update ...
	I1101 09:43:46.621479  415212 start.go:256] writing updated cluster config ...
	I1101 09:43:46.621974  415212 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:46.627853  415212 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:46.640421  415212 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:43:48.648506  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	I1101 09:43:46.945699  415823 addons.go:515] duration metric: took 2.899627908s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:43:47.431437  415823 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 09:43:47.439241  415823 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 09:43:47.447435  415823 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:47.447493  415823 api_server.go:131] duration metric: took 516.556456ms to wait for apiserver health ...
	I1101 09:43:47.447505  415823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:47.460203  415823 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:47.460331  415823 system_pods.go:61] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:47.460362  415823 system_pods.go:61] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:47.460383  415823 system_pods.go:61] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:47.460402  415823 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:47.460422  415823 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:47.460450  415823 system_pods.go:61] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:47.460469  415823 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:47.460488  415823 system_pods.go:61] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:47.460514  415823 system_pods.go:74] duration metric: took 13.000798ms to wait for pod list to return data ...
	I1101 09:43:47.460534  415823 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:47.469110  415823 default_sa.go:45] found service account: "default"
	I1101 09:43:47.469212  415823 default_sa.go:55] duration metric: took 8.656777ms for default service account to be created ...
	I1101 09:43:47.469244  415823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:47.477777  415823 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:47.478532  415823 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:47.478588  415823 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:47.478634  415823 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:47.478654  415823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:47.478668  415823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:47.478731  415823 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:47.478751  415823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:47.478762  415823 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:47.478773  415823 system_pods.go:126] duration metric: took 9.497735ms to wait for k8s-apps to be running ...
	I1101 09:43:47.478821  415823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:47.478922  415823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:47.509551  415823 system_svc.go:56] duration metric: took 30.750037ms WaitForService to wait for kubelet
	I1101 09:43:47.509588  415823 kubeadm.go:587] duration metric: took 3.463795815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:47.509625  415823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:47.517785  415823 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:47.517882  415823 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:47.517948  415823 node_conditions.go:105] duration metric: took 8.315373ms to run NodePressure ...
	I1101 09:43:47.518009  415823 start.go:242] waiting for startup goroutines ...
	I1101 09:43:47.518035  415823 start.go:247] waiting for cluster config update ...
	I1101 09:43:47.518075  415823 start.go:256] writing updated cluster config ...
	I1101 09:43:47.518797  415823 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:47.527117  415823 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:47.533342  415823 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:43:49.539378  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
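At this point both clusters are blocked on the same condition: the coredns pod exists but its Ready condition is still False, so pod_ready keeps polling (for up to 4m). The condition can be read directly with a standard jsonpath filter (pod name taken from the log):

	kubectl -n kube-system get pod coredns-66bc5c9577-mlk9t \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'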
	
	
	==> CRI-O <==
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.380986362Z" level=info msg="Created container dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=a44c62cb-1c22-4a75-9f13-ba85773beb23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.381705832Z" level=info msg="Starting container: dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac" id=90b1a359-fb1b-412b-abf9-55bb3ef36585 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.383707014Z" level=info msg="Started container" PID=1756 containerID=dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper id=90b1a359-fb1b-412b-abf9-55bb3ef36585 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0f2f4bde7b184c5f1dec55e106c292f6d533d135f5f6af3619500092a33fc0a
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.430235582Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=27c7dbad-ea92-4afe-9ff4-4dfb65e9f07d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.43314999Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97cc101d-9704-4051-ae69-5312aad6c7f5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.436063306Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=019fc4c9-d450-4e99-9c11-132505634825 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.436177929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.443296456Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.443747783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.477149981Z" level=info msg="Created container 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=019fc4c9-d450-4e99-9c11-132505634825 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.477781167Z" level=info msg="Starting container: 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d" id=49e6706b-1091-43fa-9fcf-66fca4d134c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.479572609Z" level=info msg="Started container" PID=1767 containerID=2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper id=49e6706b-1091-43fa-9fcf-66fca4d134c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0f2f4bde7b184c5f1dec55e106c292f6d533d135f5f6af3619500092a33fc0a
	Nov 01 09:43:23 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:23.436082737Z" level=info msg="Removing container: dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac" id=afd4225f-eb3c-4040-a48d-66297fd6b7f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:23 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:23.445519974Z" level=info msg="Removed container dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=afd4225f-eb3c-4040-a48d-66297fd6b7f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.338882249Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4388d467-b988-462d-92d1-e966955015f4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.341069659Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=20c2ec18-4d4e-436b-8242-74fd77f36f46 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.342883217Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=328687f1-a772-4d0c-b0a1-e7e291004b76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.34324312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.371415003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.372172358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.415032252Z" level=info msg="Created container c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=328687f1-a772-4d0c-b0a1-e7e291004b76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.416868902Z" level=info msg="Starting container: c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276" id=8e15d6e3-7911-4df1-94f0-83d1b178c8d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.419564584Z" level=info msg="Started container" PID=1801 containerID=c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper id=8e15d6e3-7911-4df1-94f0-83d1b178c8d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0f2f4bde7b184c5f1dec55e106c292f6d533d135f5f6af3619500092a33fc0a
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.488546752Z" level=info msg="Removing container: 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d" id=f1840e70-4a99-45c2-b7b9-d3bf6032c046 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.50288311Z" level=info msg="Removed container 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=f1840e70-4a99-45c2-b7b9-d3bf6032c046 name=/runtime.v1.RuntimeService/RemoveContainer
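The CRI-O excerpt above is the runtime-side view of a crash/backoff loop: the dashboard-metrics-scraper container is repeatedly created, started, exited, then removed and recreated. To dig further one would normally pull the daemon logs for the window and the container's own output before the runtime removes it, e.g.:

	# CRI-O daemon logs around the window of interest
	sudo journalctl -u crio --since "2025-11-01 09:43:00" --until "2025-11-01 09:44:00"
	# stdout/stderr of the failing container, by ID from the "container status" table below
	sudo crictl logs c56325247c9cf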
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c56325247c9cf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   e0f2f4bde7b18       dashboard-metrics-scraper-5f989dc9cf-q9fgl       kubernetes-dashboard
	a64f928570c8e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   e0cadb03abc04       kubernetes-dashboard-8694d4445c-xc92m            kubernetes-dashboard
	d6d2f3a1c4ad0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Running             storage-provisioner         1                   bd96391bbf931       storage-provisioner                              kube-system
	18c4c4f61352d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   86e6b4273dfbd       busybox                                          default
	cf741a18f69cf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   783ba29cd48c2       coredns-5dd5756b68-xh2rf                         kube-system
	2cdc9fdfdcd81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   bd96391bbf931       storage-provisioner                              kube-system
	4691c84ef3d07       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   11c136e5e6bf0       kube-proxy-zqs8f                                 kube-system
	fe67b21efed6f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   b4d9f74baaca7       kindnet-5v6hn                                    kube-system
	67383aa07ea5a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   465f0cd488238       kube-scheduler-old-k8s-version-106430            kube-system
	21c9e16bfcb6f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   cbf385e5afea3       kube-apiserver-old-k8s-version-106430            kube-system
	227f629919ddd       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   3d1cb6b0f215e       kube-controller-manager-old-k8s-version-106430   kube-system
	2879f0fdda15a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   f07e912331243       etcd-old-k8s-version-106430                      kube-system
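The table above is crictl's default output for "crictl ps -a". A name filter narrows it to the crash-looping scraper, and JSON output gives the same data in machine-readable form:

	sudo crictl ps -a --name dashboard-metrics-scraper
	sudo crictl ps -a -o json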
	
	
	==> coredns [cf741a18f69cfdad6379c162ce83384fca951d4966d6fad0581fe96cf1e91908] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38017 - 10493 "HINFO IN 4377672268192665766.4869584298609322252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023873201s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
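CoreDNS here is still waiting for its Kubernetes API watch to sync, which matches the not-yet-Ready coredns pods seen earlier in the run. The same stream can be pulled through the API server by label (k8s-app=kube-dns is the label the stock CoreDNS deployment carries):

	kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20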
	
	
	==> describe nodes <==
	Name:               old-k8s-version-106430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-106430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=old-k8s-version-106430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:41:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-106430
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:43:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-106430
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                576f40f7-444f-4b9e-a2cc-82322f1cc662
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-xh2rf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-106430                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-5v6hn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-106430             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-106430    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-zqs8f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-106430             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-q9fgl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xc92m             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 2m6s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x9 over 2m6s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x7 over 2m6s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m1s                 kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m1s                 kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m1s                 kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-106430 event: Registered Node old-k8s-version-106430 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-106430 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-106430 event: Registered Node old-k8s-version-106430 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588] <==
	{"level":"info","ts":"2025-11-01T09:42:57.91674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:42:57.916775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:42:57.918833Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:42:57.918992Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:42:57.919051Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:42:57.919158Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:42:57.919208Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:42:59.109001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:42:59.109073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:42:59.109098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-01T09:42:59.109115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.109123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.10913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.109137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.110206Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-106430 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:42:59.110218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:42:59.110248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:42:59.11044Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:42:59.110467Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:42:59.111585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:42:59.111594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-01T09:43:24.613345Z","caller":"traceutil/trace.go:171","msg":"trace[931700887] linearizableReadLoop","detail":"{readStateIndex:674; appliedIndex:673; }","duration":"119.721627ms","start":"2025-11-01T09:43:24.4936Z","end":"2025-11-01T09:43:24.613321Z","steps":["trace[931700887] 'read index received'  (duration: 27.854568ms)","trace[931700887] 'applied index is now lower than readState.Index'  (duration: 91.866211ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:43:24.613426Z","caller":"traceutil/trace.go:171","msg":"trace[1894147858] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"162.069439ms","start":"2025-11-01T09:43:24.451332Z","end":"2025-11-01T09:43:24.613402Z","steps":["trace[1894147858] 'process raft request'  (duration: 70.171508ms)","trace[1894147858] 'compare'  (duration: 91.720847ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:43:24.613503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.90451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-xh2rf\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2025-11-01T09:43:24.613556Z","caller":"traceutil/trace.go:171","msg":"trace[1153651909] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-xh2rf; range_end:; response_count:1; response_revision:646; }","duration":"119.978104ms","start":"2025-11-01T09:43:24.493564Z","end":"2025-11-01T09:43:24.613542Z","steps":["trace[1153651909] 'agreement among raft nodes before linearized reading'  (duration: 119.85112ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:43:51 up  1:26,  0 user,  load average: 5.06, 4.57, 2.95
	Linux old-k8s-version-106430 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe67b21efed6f525df5544423bd5e06bbcd9f87fc9fcb3d89a75da908e8b778a] <==
	I1101 09:43:00.984500       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:01.003315       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:43:01.003512       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:01.003529       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:01.003561       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:01.283671       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:01.283706       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:01.283718       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:01.303317       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:01.680447       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:01.680482       1 metrics.go:72] Registering metrics
	I1101 09:43:01.680545       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:11.283653       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:11.283760       1 main.go:301] handling current node
	I1101 09:43:21.285453       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:21.285507       1 main.go:301] handling current node
	I1101 09:43:31.283900       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:31.283975       1 main.go:301] handling current node
	I1101 09:43:41.283425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:41.283464       1 main.go:301] handling current node
	I1101 09:43:51.285487       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:51.285541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a] <==
	I1101 09:43:00.182258       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:43:00.182294       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:43:00.182317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:00.196084       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:43:00.244756       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:43:00.245131       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:43:00.245201       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:43:00.245968       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:43:00.254981       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:43:00.255175       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:43:00.279978       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:00.290039       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:43:01.151092       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:01.267974       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:43:01.305837       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:43:01.331694       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:01.345590       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:01.354876       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:43:01.412843       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.71.114"}
	I1101 09:43:01.436710       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.242.166"}
	I1101 09:43:13.251650       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:43:13.251703       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:43:13.651527       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:43:13.651528       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:43:13.702159       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4] <==
	I1101 09:43:13.506320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="291.465168ms"
	I1101 09:43:13.506457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.504µs"
	I1101 09:43:13.705842       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1101 09:43:13.707670       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1101 09:43:13.717548       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xc92m"
	I1101 09:43:13.717606       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	I1101 09:43:13.722574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.391188ms"
	I1101 09:43:13.725150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.713216ms"
	I1101 09:43:13.731472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.213281ms"
	I1101 09:43:13.731701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="122.084µs"
	I1101 09:43:13.737978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.345453ms"
	I1101 09:43:13.738070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.579µs"
	I1101 09:43:13.746599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.836µs"
	I1101 09:43:13.770114       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:43:13.775481       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:43:13.775516       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:43:18.443079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.336944ms"
	I1101 09:43:18.443373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="110.947µs"
	I1101 09:43:22.441831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.26µs"
	I1101 09:43:23.447147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.578µs"
	I1101 09:43:24.614980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.898µs"
	I1101 09:43:34.197649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.366277ms"
	I1101 09:43:34.197884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="170.426µs"
	I1101 09:43:40.509811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.347µs"
	I1101 09:43:44.046907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.937µs"
	
	
	==> kube-proxy [4691c84ef3d07ba57728ac09a4552d6f8bf0fcc54705555278513250908efe00] <==
	I1101 09:43:00.784230       1 server_others.go:69] "Using iptables proxy"
	I1101 09:43:00.801202       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1101 09:43:00.826568       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:00.829987       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:43:00.830032       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:43:00.830041       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:43:00.830085       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:43:00.830429       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:43:00.830452       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:00.831352       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:43:00.831382       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:43:00.831426       1 config.go:188] "Starting service config controller"
	I1101 09:43:00.831432       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:43:00.831457       1 config.go:315] "Starting node config controller"
	I1101 09:43:00.831461       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:43:00.932052       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:43:00.932263       1 shared_informer.go:318] Caches are synced for node config
	I1101 09:43:00.932352       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91] <==
	I1101 09:42:58.192019       1 serving.go:348] Generated self-signed cert in-memory
	I1101 09:43:00.230004       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 09:43:00.230031       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:00.233991       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 09:43:00.234028       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 09:43:00.234098       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:43:00.234122       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 09:43:00.234095       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:00.234160       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:43:00.235034       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 09:43:00.235175       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 09:43:00.334546       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:43:00.334627       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 09:43:00.334686       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.723768     722 topology_manager.go:215] "Topology Admit Handler" podUID="79c2ef77-baca-4182-8bd9-a64e4379615f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xc92m"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.724158     722 topology_manager.go:215] "Topology Admit Handler" podUID="e6e50343-6215-403c-859a-a0fca77e0e83" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858503     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/79c2ef77-baca-4182-8bd9-a64e4379615f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xc92m\" (UID: \"79c2ef77-baca-4182-8bd9-a64e4379615f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xc92m"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858560     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqtt\" (UniqueName: \"kubernetes.io/projected/79c2ef77-baca-4182-8bd9-a64e4379615f-kube-api-access-2cqtt\") pod \"kubernetes-dashboard-8694d4445c-xc92m\" (UID: \"79c2ef77-baca-4182-8bd9-a64e4379615f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xc92m"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858591     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6e50343-6215-403c-859a-a0fca77e0e83-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-q9fgl\" (UID: \"e6e50343-6215-403c-859a-a0fca77e0e83\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858689     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r84x\" (UniqueName: \"kubernetes.io/projected/e6e50343-6215-403c-859a-a0fca77e0e83-kube-api-access-7r84x\") pod \"dashboard-metrics-scraper-5f989dc9cf-q9fgl\" (UID: \"e6e50343-6215-403c-859a-a0fca77e0e83\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	Nov 01 09:43:22 old-k8s-version-106430 kubelet[722]: I1101 09:43:22.429717     722 scope.go:117] "RemoveContainer" containerID="dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac"
	Nov 01 09:43:22 old-k8s-version-106430 kubelet[722]: I1101 09:43:22.441869     722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xc92m" podStartSLOduration=5.254453001 podCreationTimestamp="2025-11-01 09:43:13 +0000 UTC" firstStartedPulling="2025-11-01 09:43:14.049503488 +0000 UTC m=+16.808114186" lastFinishedPulling="2025-11-01 09:43:18.23685922 +0000 UTC m=+20.995469913" observedRunningTime="2025-11-01 09:43:18.430487445 +0000 UTC m=+21.189098157" watchObservedRunningTime="2025-11-01 09:43:22.441808728 +0000 UTC m=+25.200419440"
	Nov 01 09:43:23 old-k8s-version-106430 kubelet[722]: I1101 09:43:23.434498     722 scope.go:117] "RemoveContainer" containerID="dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac"
	Nov 01 09:43:23 old-k8s-version-106430 kubelet[722]: I1101 09:43:23.434657     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:23 old-k8s-version-106430 kubelet[722]: E1101 09:43:23.435047     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:24 old-k8s-version-106430 kubelet[722]: I1101 09:43:24.440172     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:24 old-k8s-version-106430 kubelet[722]: E1101 09:43:24.440585     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:25 old-k8s-version-106430 kubelet[722]: I1101 09:43:25.441977     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:25 old-k8s-version-106430 kubelet[722]: E1101 09:43:25.442223     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: I1101 09:43:40.337983     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: I1101 09:43:40.485654     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: I1101 09:43:40.486142     722 scope.go:117] "RemoveContainer" containerID="c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: E1101 09:43:40.487791     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:44 old-k8s-version-106430 kubelet[722]: I1101 09:43:44.027719     722 scope.go:117] "RemoveContainer" containerID="c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	Nov 01 09:43:44 old-k8s-version-106430 kubelet[722]: E1101 09:43:44.028099     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: kubelet.service: Consumed 1.635s CPU time.
	
	
	==> kubernetes-dashboard [a64f928570c8e93d7275efca3d34ba9452ed83d5461da05e9ccb47d00976bc06] <==
	2025/11/01 09:43:18 Starting overwatch
	2025/11/01 09:43:18 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:18 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:18 Using secret token for csrf signing
	2025/11/01 09:43:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:18 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 09:43:18 Generating JWE encryption key
	2025/11/01 09:43:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:18 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:18 Creating in-cluster Sidecar client
	2025/11/01 09:43:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:18 Serving insecurely on HTTP port: 9090
	2025/11/01 09:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2cdc9fdfdcd814051d3dd77cdf55c61477f757879bf74593ca0dd53e09115dbc] <==
	I1101 09:43:00.746410       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:00.748190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d6d2f3a1c4ad0645664556baf4dc3811e4149eaae2198bcfc7acb38d3e3375d9] <==
	I1101 09:43:01.430811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:43:01.440327       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:43:01.440416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:43:18.844664       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:43:18.844821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-106430_b56e0a47-aa2b-4b3a-8183-1a69727715b6!
	I1101 09:43:18.844806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d7c9887-d56d-4587-80ec-07ecbd12d0c2", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-106430_b56e0a47-aa2b-4b3a-8183-1a69727715b6 became leader
	I1101 09:43:18.946037       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-106430_b56e0a47-aa2b-4b3a-8183-1a69727715b6!
	

-- /stdout --
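The kubelet entries in the dump above show dashboard-metrics-scraper cycling through CrashLoopBackOff with an escalating back-off (10s, then 20s). A minimal triage sketch against this cluster, assuming the kubectl context and pod name captured in the log above (the pod-name suffix is specific to this run):

	# Fetch the output of the previous (crashed) container instance.
	kubectl --context old-k8s-version-106430 -n kubernetes-dashboard \
	  logs --previous dashboard-metrics-scraper-5f989dc9cf-q9fgl
	# Inspect the last terminated state and exit code.
	kubectl --context old-k8s-version-106430 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-5f989dc9cf-q9fgl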
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-106430 -n old-k8s-version-106430
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-106430 -n old-k8s-version-106430: exit status 2 (468.246047ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-106430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
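The field-selector query above (status.phase!=Running) only catches pods outside the Running phase; a pod stuck in CrashLoopBackOff, like dashboard-metrics-scraper here, still reports phase Running and slips through that filter. A hedged complement that surfaces restart counts instead (custom-columns paths are standard kubectl JSONPath; only the first container per pod is shown):

	# List restart counts across all namespaces; high counts flag crash loops.
	kubectl --context old-k8s-version-106430 get pods -A \
	  -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount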
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-106430
helpers_test.go:243: (dbg) docker inspect old-k8s-version-106430:

-- stdout --
	[
	    {
	        "Id": "7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca",
	        "Created": "2025-11-01T09:41:36.12631196Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 406560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:42:51.008788031Z",
	            "FinishedAt": "2025-11-01T09:42:49.79183264Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/hostname",
	        "HostsPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/hosts",
	        "LogPath": "/var/lib/docker/containers/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca/7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca-json.log",
	        "Name": "/old-k8s-version-106430",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-106430:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-106430",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7fdf9f94daa8085a9a0e7547fde67fa8a685f9b97f1eae0bfc6cf695235cb7ca",
	                "LowerDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ae726b176049849c1a9672ea5c13bb14a757363c1419eeddc22aa0c5e63aa5c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-106430",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-106430/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-106430",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-106430",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-106430",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5673fa198416ef088eb15414769b91962feb4c65414be69534607b820d44532f",
	            "SandboxKey": "/var/run/docker/netns/5673fa198416",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-106430": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:d1:1b:d4:0a:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eae036c06ea144341078058874d7c650e992adb447b26734be766752bb055131",
	                    "EndpointID": "b3ff62d077193eb3ab393ec519850009622eccacd6370cbb91a12a568aee818f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-106430",
	                        "7fdf9f94daa8"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
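The NetworkSettings block above shows the API server's 8443/tcp published on 127.0.0.1:33111. When only that mapping is needed, a Go template keeps the inspection to one line (container name as inspected above):

	# Print the host port backing the apiserver's 8443/tcp.
	docker inspect old-k8s-version-106430 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
	# Equivalent via the dedicated subcommand.
	docker port old-k8s-version-106430 8443/tcp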
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430: exit status 2 (451.973997ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
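minikube status exits non-zero whenever any tracked component is not Running, so exit status 2 here is consistent with the kubelet having been stopped by the pause step (see the systemd "Stopped kubelet.service" lines in the log dump above). A sketch that prints several components in one call, using the same Status fields the harness queries individually:

	# Host may be Running while Kubelet/APIServer are Stopped or Paused.
	out/minikube-linux-amd64 status -p old-k8s-version-106430 \
	  --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'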
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-106430 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-106430 logs -n 25: (1.68516502s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p custom-flannel-307390 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo systemctl cat crio --no-pager                                                                                                                                                                                   │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                         │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ ssh     │ -p custom-flannel-307390 sudo crio config                                                                                                                                                                                                     │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p custom-flannel-307390                                                                                                                                                                                                                      │ custom-flannel-307390        │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:43:35
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
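	Given the [IWEF] line format documented above, warnings and errors can be pulled out of a saved copy of this output with a simple filter (start.log is a hypothetical filename):
	
	    grep -E '^[WE][0-9]{4} ' start.log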
	I1101 09:43:35.627182  415823 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:43:35.627567  415823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:43:35.627581  415823 out.go:374] Setting ErrFile to fd 2...
	I1101 09:43:35.627588  415823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:43:35.627908  415823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:43:35.628542  415823 out.go:368] Setting JSON to false
	I1101 09:43:35.630224  415823 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5154,"bootTime":1761985062,"procs":397,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:43:35.630360  415823 start.go:143] virtualization: kvm guest
	I1101 09:43:35.632340  415823 out.go:179] * [default-k8s-diff-port-927869] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:43:35.633653  415823 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:43:35.633693  415823 notify.go:221] Checking for updates...
	I1101 09:43:35.635968  415823 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:43:35.637213  415823 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:35.638670  415823 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:43:35.640555  415823 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:43:35.641935  415823 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:43:35.643593  415823 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:35.644289  415823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:43:35.677159  415823 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:43:35.677294  415823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:43:35.745107  415823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 09:43:35.731214706 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:43:35.745203  415823 docker.go:319] overlay module found
	I1101 09:43:35.747304  415823 out.go:179] * Using the docker driver based on existing profile
	I1101 09:43:35.748842  415823 start.go:309] selected driver: docker
	I1101 09:43:35.748864  415823 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:35.749041  415823 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:43:35.749526  415823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:43:35.817781  415823 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-01 09:43:35.804958596 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:43:35.818730  415823 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:35.818807  415823 cni.go:84] Creating CNI manager for ""
	I1101 09:43:35.818878  415823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:43:35.818969  415823 start.go:353] cluster config:
	{Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
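	The cluster config dumped above is persisted as JSON at the profile path saved a few lines below; assuming jq is available and that the JSON keys mirror this struct dump, the interesting fields can be inspected directly:
	
	    jq '{version: .KubernetesConfig.KubernetesVersion, nodes: .Nodes}' \
	      /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json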
	I1101 09:43:35.822718  415823 out.go:179] * Starting "default-k8s-diff-port-927869" primary control-plane node in "default-k8s-diff-port-927869" cluster
	I1101 09:43:35.824019  415823 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:43:35.825673  415823 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:43:35.826873  415823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:43:35.826950  415823 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:43:35.826974  415823 cache.go:59] Caching tarball of preloaded images
	I1101 09:43:35.826999  415823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:43:35.827081  415823 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:43:35.827098  415823 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:43:35.827210  415823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json ...
	I1101 09:43:35.853395  415823 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:43:35.853418  415823 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:43:35.853434  415823 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:43:35.853466  415823 start.go:360] acquireMachinesLock for default-k8s-diff-port-927869: {Name:mk1d147ba61fa7b0d79d77d5ddb1fccc76bfa8fa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:43:35.853527  415823 start.go:364] duration metric: took 41.392µs to acquireMachinesLock for "default-k8s-diff-port-927869"
	I1101 09:43:35.853544  415823 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:43:35.853551  415823 fix.go:54] fixHost starting: 
	I1101 09:43:35.853792  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:35.877354  415823 fix.go:112] recreateIfNeeded on default-k8s-diff-port-927869: state=Stopped err=<nil>
	W1101 09:43:35.877393  415823 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:43:35.896303  406120 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-106430" is "Ready"
	I1101 09:43:35.896339  406120 pod_ready.go:86] duration metric: took 399.027625ms for pod "kube-scheduler-old-k8s-version-106430" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:35.896358  406120 pod_ready.go:40] duration metric: took 34.410206491s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:35.960159  406120 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1101 09:43:35.962059  406120 out.go:203] 
	W1101 09:43:35.963703  406120 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1101 09:43:35.965007  406120 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1101 09:43:35.966481  406120 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-106430" cluster and "default" namespace by default
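	The version-matched kubectl that the hint above refers to can be invoked through this profile directly:
	
	    out/minikube-linux-amd64 -p old-k8s-version-106430 kubectl -- get pods -A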
	W1101 09:43:34.566565  412381 pod_ready.go:104] pod "coredns-66bc5c9577-8qn69" is not "Ready", error: node "no-preload-224845" hosting pod "coredns-66bc5c9577-8qn69" is not "Ready" (will retry)
	W1101 09:43:36.567440  412381 pod_ready.go:104] pod "coredns-66bc5c9577-8qn69" is not "Ready", error: node "no-preload-224845" hosting pod "coredns-66bc5c9577-8qn69" is not "Ready" (will retry)
	W1101 09:43:39.066180  412381 pod_ready.go:104] pod "coredns-66bc5c9577-8qn69" is not "Ready", error: node "no-preload-224845" hosting pod "coredns-66bc5c9577-8qn69" is not "Ready" (will retry)
	I1101 09:43:34.992539  415212 out.go:252] * Restarting existing docker container for "embed-certs-214580" ...
	I1101 09:43:34.992640  415212 cli_runner.go:164] Run: docker start embed-certs-214580
	I1101 09:43:35.333517  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:35.358071  415212 kic.go:430] container "embed-certs-214580" state is running.
	I1101 09:43:35.358543  415212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-214580
	I1101 09:43:35.380936  415212 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/config.json ...
	I1101 09:43:35.381273  415212 machine.go:94] provisionDockerMachine start ...
	I1101 09:43:35.381367  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:35.406922  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:35.407325  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:35.407339  415212 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:43:35.408186  415212 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48166->127.0.0.1:33118: read: connection reset by peer
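	The connection reset above is the usual race right after docker start, before sshd inside the container is listening; libmachine retries and succeeds about three seconds later. Using the port, key path, and user that sshutil.go records further down, the machine could be reached by hand (sketch, not part of the recorded run):
	
	    ssh -o StrictHostKeyChecking=no -p 33118 \
	      -i /home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa \
	      docker@127.0.0.1 hostname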
	I1101 09:43:38.551979  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-214580
	
	I1101 09:43:38.552003  415212 ubuntu.go:182] provisioning hostname "embed-certs-214580"
	I1101 09:43:38.552057  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:38.571729  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:38.572053  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:38.572073  415212 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-214580 && echo "embed-certs-214580" | sudo tee /etc/hostname
	I1101 09:43:38.724394  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-214580
	
	I1101 09:43:38.724486  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:38.743301  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:38.743523  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:38.743612  415212 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-214580' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-214580/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-214580' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:43:38.887278  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
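	One way to confirm the /etc/hosts edit above landed, via minikube's ssh passthrough (sketch, not part of the recorded run):
	
	    out/minikube-linux-amd64 -p embed-certs-214580 ssh -- grep embed-certs-214580 /etc/hosts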
	I1101 09:43:38.887308  415212 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:43:38.887328  415212 ubuntu.go:190] setting up certificates
	I1101 09:43:38.887341  415212 provision.go:84] configureAuth start
	I1101 09:43:38.887393  415212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-214580
	I1101 09:43:38.906804  415212 provision.go:143] copyHostCerts
	I1101 09:43:38.906876  415212 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:43:38.906902  415212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:43:38.907006  415212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:43:38.907108  415212 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:43:38.907118  415212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:43:38.907146  415212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:43:38.907200  415212 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:43:38.907207  415212 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:43:38.907228  415212 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:43:38.907277  415212 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.embed-certs-214580 san=[127.0.0.1 192.168.94.2 embed-certs-214580 localhost minikube]
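	provision.go generates this server certificate in-process; a roughly equivalent openssl sequence for the same subject and SAN set would look like this (sketch, file names taken from the log paths):
	
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -subj "/O=jenkins.embed-certs-214580" -out server.csr
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:embed-certs-214580,DNS:localhost,DNS:minikube')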
	I1101 09:43:39.083797  415212 provision.go:177] copyRemoteCerts
	I1101 09:43:39.083863  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:43:39.083904  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.103049  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.204772  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:43:39.222860  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1101 09:43:39.241472  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1101 09:43:39.259144  415212 provision.go:87] duration metric: took 371.788271ms to configureAuth
	I1101 09:43:39.259175  415212 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:43:39.259364  415212 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:39.259515  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.279273  415212 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:39.279490  415212 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33118 <nil> <nil>}
	I1101 09:43:39.279504  415212 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:43:39.604874  415212 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:43:39.604902  415212 machine.go:97] duration metric: took 4.223609305s to provisionDockerMachine
	I1101 09:43:39.604947  415212 start.go:293] postStartSetup for "embed-certs-214580" (driver="docker")
	I1101 09:43:39.604961  415212 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:43:39.605027  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:43:39.605107  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.628231  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.731278  415212 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:43:39.735345  415212 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:43:39.735373  415212 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:43:39.735388  415212 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:43:39.735445  415212 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:43:39.735540  415212 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:43:39.735639  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:43:39.744974  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:35.878961  415823 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-927869" ...
	I1101 09:43:35.879034  415823 cli_runner.go:164] Run: docker start default-k8s-diff-port-927869
	I1101 09:43:36.212664  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:36.235772  415823 kic.go:430] container "default-k8s-diff-port-927869" state is running.
	I1101 09:43:36.236319  415823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927869
	I1101 09:43:36.260514  415823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/config.json ...
	I1101 09:43:36.260821  415823 machine.go:94] provisionDockerMachine start ...
	I1101 09:43:36.260946  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:36.284553  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:36.284868  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:36.284895  415823 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:43:36.285631  415823 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42336->127.0.0.1:33123: read: connection reset by peer
	I1101 09:43:39.434368  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-927869
	
	I1101 09:43:39.434405  415823 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-927869"
	I1101 09:43:39.434476  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:39.457587  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:39.457862  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:39.457879  415823 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-927869 && echo "default-k8s-diff-port-927869" | sudo tee /etc/hostname
	I1101 09:43:39.616291  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-927869
	
	I1101 09:43:39.616378  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:39.637043  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:39.637269  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:39.637299  415823 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-927869' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-927869/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-927869' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:43:39.781139  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:43:39.781174  415823 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:43:39.781200  415823 ubuntu.go:190] setting up certificates
	I1101 09:43:39.781212  415823 provision.go:84] configureAuth start
	I1101 09:43:39.781269  415823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927869
	I1101 09:43:39.803384  415823 provision.go:143] copyHostCerts
	I1101 09:43:39.803440  415823 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:43:39.803457  415823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:43:39.803528  415823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:43:39.803656  415823 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:43:39.803668  415823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:43:39.803699  415823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:43:39.803758  415823 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:43:39.803765  415823 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:43:39.803787  415823 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:43:39.803838  415823 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-927869 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-927869 localhost minikube]
	I1101 09:43:39.824527  415823 provision.go:177] copyRemoteCerts
	I1101 09:43:39.824583  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:43:39.824621  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:39.845103  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:39.951417  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:43:39.972950  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1101 09:43:39.992991  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:43:40.013279  415823 provision.go:87] duration metric: took 232.051772ms to configureAuth
	I1101 09:43:40.013313  415823 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:43:40.013496  415823 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:40.013631  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.032984  415823 main.go:143] libmachine: Using SSH client type: native
	I1101 09:43:40.033233  415823 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1101 09:43:40.033254  415823 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:43:40.480951  415823 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:43:40.480977  415823 machine.go:97] duration metric: took 4.220138355s to provisionDockerMachine
	I1101 09:43:40.480999  415823 start.go:293] postStartSetup for "default-k8s-diff-port-927869" (driver="docker")
	I1101 09:43:40.481011  415823 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:43:40.481084  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:43:40.481147  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.509215  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:39.566268  412381 pod_ready.go:94] pod "coredns-66bc5c9577-8qn69" is "Ready"
	I1101 09:43:39.566303  412381 pod_ready.go:86] duration metric: took 9.00594564s for pod "coredns-66bc5c9577-8qn69" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.569003  412381 pod_ready.go:83] waiting for pod "etcd-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.573960  412381 pod_ready.go:94] pod "etcd-no-preload-224845" is "Ready"
	I1101 09:43:39.573994  412381 pod_ready.go:86] duration metric: took 4.963685ms for pod "etcd-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.576239  412381 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.582677  412381 pod_ready.go:94] pod "kube-apiserver-no-preload-224845" is "Ready"
	I1101 09:43:39.582712  412381 pod_ready.go:86] duration metric: took 6.44298ms for pod "kube-apiserver-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:39.585728  412381 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.165412  412381 pod_ready.go:94] pod "kube-controller-manager-no-preload-224845" is "Ready"
	I1101 09:43:40.165446  412381 pod_ready.go:86] duration metric: took 579.690538ms for pod "kube-controller-manager-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.365706  412381 pod_ready.go:83] waiting for pod "kube-proxy-f2f64" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.765375  412381 pod_ready.go:94] pod "kube-proxy-f2f64" is "Ready"
	I1101 09:43:40.765408  412381 pod_ready.go:86] duration metric: took 399.669543ms for pod "kube-proxy-f2f64" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:40.964829  412381 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:41.364446  412381 pod_ready.go:94] pod "kube-scheduler-no-preload-224845" is "Ready"
	I1101 09:43:41.364473  412381 pod_ready.go:86] duration metric: took 399.612916ms for pod "kube-scheduler-no-preload-224845" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:43:41.364485  412381 pod_ready.go:40] duration metric: took 10.80753384s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:41.419363  412381 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:43:41.421638  412381 out.go:179] * Done! kubectl is now configured to use "no-preload-224845" cluster and "default" namespace by default
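	The pod_ready polling above (Ready or gone, per label selector) is close to what a manual kubectl wait would do, modulo the "gone" case (sketch):
	
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s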
	I1101 09:43:39.768818  415212 start.go:296] duration metric: took 163.845698ms for postStartSetup
	I1101 09:43:39.768933  415212 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:43:39.768992  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.791082  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.894971  415212 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:43:39.901509  415212 fix.go:56] duration metric: took 4.931458871s for fixHost
	I1101 09:43:39.901542  415212 start.go:83] releasing machines lock for "embed-certs-214580", held for 4.931512794s
	I1101 09:43:39.901616  415212 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-214580
	I1101 09:43:39.921567  415212 ssh_runner.go:195] Run: cat /version.json
	I1101 09:43:39.921615  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.921674  415212 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:43:39.921747  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:39.941038  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:39.941475  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:40.041932  415212 ssh_runner.go:195] Run: systemctl --version
	I1101 09:43:40.132078  415212 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:43:40.184645  415212 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:43:40.191562  415212 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:43:40.191638  415212 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:43:40.203528  415212 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:43:40.203560  415212 start.go:496] detecting cgroup driver to use...
	I1101 09:43:40.203605  415212 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:43:40.203657  415212 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:43:40.223845  415212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:43:40.242335  415212 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:43:40.242401  415212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:43:40.264507  415212 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:43:40.283831  415212 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:43:40.410011  415212 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:43:40.544411  415212 docker.go:234] disabling docker service ...
	I1101 09:43:40.544482  415212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:43:40.565293  415212 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:43:40.584761  415212 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:43:40.708564  415212 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:43:40.824895  415212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:43:40.843985  415212 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:43:40.867494  415212 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:43:40.867577  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.881802  415212 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:43:40.881867  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.895576  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.909540  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.921684  415212 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:43:40.934221  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.945827  415212 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.958115  415212 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:40.971161  415212 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:43:40.982640  415212 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:43:40.993967  415212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:41.112164  415212 ssh_runner.go:195] Run: sudo systemctl restart crio
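	After the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should contain lines like the following (reconstructed from the commands, not captured from the run):
	
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]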
	I1101 09:43:41.439378  415212 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:43:41.439449  415212 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:43:41.445073  415212 start.go:564] Will wait 60s for crictl version
	I1101 09:43:41.445136  415212 ssh_runner.go:195] Run: which crictl
	I1101 09:43:41.450008  415212 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:43:41.484874  415212 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:43:41.485049  415212 ssh_runner.go:195] Run: crio --version
	I1101 09:43:41.531742  415212 ssh_runner.go:195] Run: crio --version
	I1101 09:43:41.578974  415212 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:43:40.634212  415823 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:43:40.639840  415823 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:43:40.639874  415823 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:43:40.639887  415823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:43:40.639958  415823 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:43:40.640050  415823 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:43:40.640168  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:43:40.651132  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:40.678266  415823 start.go:296] duration metric: took 197.249045ms for postStartSetup
	I1101 09:43:40.678359  415823 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:43:40.678411  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.704400  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:40.815773  415823 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:43:40.822613  415823 fix.go:56] duration metric: took 4.969051921s for fixHost
	I1101 09:43:40.822656  415823 start.go:83] releasing machines lock for "default-k8s-diff-port-927869", held for 4.969117696s
	I1101 09:43:40.822729  415823 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-927869
	I1101 09:43:40.845779  415823 ssh_runner.go:195] Run: cat /version.json
	I1101 09:43:40.845815  415823 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:43:40.845853  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.845876  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:40.871388  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:40.872706  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:40.978902  415823 ssh_runner.go:195] Run: systemctl --version
	I1101 09:43:41.065431  415823 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:43:41.115595  415823 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:43:41.122147  415823 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:43:41.122230  415823 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:43:41.134352  415823 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:43:41.134385  415823 start.go:496] detecting cgroup driver to use...
	I1101 09:43:41.134423  415823 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:43:41.134492  415823 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:43:41.158616  415823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:43:41.176786  415823 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:43:41.176851  415823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:43:41.199768  415823 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:43:41.215252  415823 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:43:41.304375  415823 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:43:41.413545  415823 docker.go:234] disabling docker service ...
	I1101 09:43:41.413622  415823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:43:41.435152  415823 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:43:41.452973  415823 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:43:41.576705  415823 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:43:41.715584  415823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:43:41.733364  415823 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:43:41.759420  415823 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:43:41.759492  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.772085  415823 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:43:41.772159  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.785349  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.799151  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.816053  415823 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:43:41.829986  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.844748  415823 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.858345  415823 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:43:41.873325  415823 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:43:41.883887  415823 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:43:41.895017  415823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:42.019796  415823 ssh_runner.go:195] Run: sudo systemctl restart crio
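The sed runs above edit /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, the unprivileged-port sysctl) before the daemon-reload and crio restart. A Go sketch of the same rewrite-a-config-line idea, assuming a simple regexp per key; run it against a scratch copy of the file, as it is not minikube's crio.go:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setConfLine replaces any existing `key = ...` line with `key = "value"`,
// mirroring the sed 's|^.*key = .*$|...|' invocations in the log.
func setConfLine(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	const path = "02-crio.conf" // use a scratch copy, not the live config
	conf, err := os.ReadFile(path)
	if err != nil {
		fmt.Println(err)
		return
	}
	conf = setConfLine(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	conf = setConfLine(conf, "cgroup_manager", "systemd")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		fmt.Println(err)
	}
}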
	I1101 09:43:42.342412  415823 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:43:42.342479  415823 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:43:42.347776  415823 start.go:564] Will wait 60s for crictl version
	I1101 09:43:42.347846  415823 ssh_runner.go:195] Run: which crictl
	I1101 09:43:42.352207  415823 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:43:42.382202  415823 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:43:42.382293  415823 ssh_runner.go:195] Run: crio --version
	I1101 09:43:42.417096  415823 ssh_runner.go:195] Run: crio --version
	I1101 09:43:42.454638  415823 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:43:41.581667  415212 cli_runner.go:164] Run: docker network inspect embed-certs-214580 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:43:41.610857  415212 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1101 09:43:41.620391  415212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:43:41.643373  415212 kubeadm.go:884] updating cluster {Name:embed-certs-214580 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-214580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:43:41.643534  415212 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:43:41.643602  415212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:41.687758  415212 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:41.687786  415212 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:43:41.687842  415212 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:41.722651  415212 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:41.722677  415212 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:43:41.722687  415212 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1101 09:43:41.722813  415212 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-214580 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-214580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:43:41.722896  415212 ssh_runner.go:195] Run: crio config
	I1101 09:43:41.811617  415212 cni.go:84] Creating CNI manager for ""
	I1101 09:43:41.811658  415212 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:43:41.811680  415212 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:43:41.811720  415212 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-214580 NodeName:embed-certs-214580 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:43:41.811981  415212 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-214580"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:43:41.812063  415212 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:43:41.824766  415212 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:43:41.824890  415212 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:43:41.838302  415212 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:43:41.857720  415212 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:43:41.879145  415212 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
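The kubeadm.yaml.new copied above is rendered from the cluster parameters (advertise address, bind port, node name, CRI socket). A toy render of just the InitConfiguration stanza with text/template, trimmed to the fields the log actually varies; it is a sketch, not minikube's generator:

package main

import (
	"log"
	"os"
	"text/template"
)

// initCfg carries only the values the InitConfiguration stanza above varies on.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix://{{.CRISocket}}
  name: "{{.NodeName}}"
  taints: []
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values copied from the embed-certs-214580 lines above.
	err := t.Execute(os.Stdout, initCfg{
		AdvertiseAddress: "192.168.94.2",
		BindPort:         8443,
		NodeName:         "embed-certs-214580",
		CRISocket:        "/var/run/crio/crio.sock",
	})
	if err != nil {
		log.Fatal(err)
	}
}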
	I1101 09:43:41.900090  415212 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:43:41.906256  415212 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
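The bash one-liner above keeps /etc/hosts idempotent: strip any existing line for the name, append the fresh IP<tab>name pair, then copy the result back. The same logic in Go, pointed at a scratch file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends "ip\tname",
// the same grep -v + echo dance the log's bash one-liner performs.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("hosts.test", "192.168.94.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}

Note the shell version writes to /tmp/h.$$ first and then copies over the destination; a production Go version would similarly write to a temp file and rename to avoid a torn /etc/hosts.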
	I1101 09:43:41.920630  415212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:42.046983  415212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:42.080483  415212 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580 for IP: 192.168.94.2
	I1101 09:43:42.080510  415212 certs.go:195] generating shared ca certs ...
	I1101 09:43:42.080531  415212 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:42.080742  415212 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:43:42.080808  415212 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:43:42.080825  415212 certs.go:257] generating profile certs ...
	I1101 09:43:42.080990  415212 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/client.key
	I1101 09:43:42.081060  415212 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/apiserver.key.db1fd92b
	I1101 09:43:42.081117  415212 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/proxy-client.key
	I1101 09:43:42.081245  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:43:42.081280  415212 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:43:42.081288  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:43:42.081317  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:43:42.081347  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:43:42.081372  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:43:42.081418  415212 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:42.082388  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:43:42.109112  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:43:42.135779  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:43:42.176881  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:43:42.206272  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1101 09:43:42.231191  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:43:42.252071  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:43:42.274789  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/embed-certs-214580/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:43:42.295541  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:43:42.318343  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:43:42.340360  415212 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:43:42.362898  415212 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:43:42.379029  415212 ssh_runner.go:195] Run: openssl version
	I1101 09:43:42.386635  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:43:42.397478  415212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:43:42.402466  415212 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:43:42.402534  415212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:43:42.446825  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:43:42.456408  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:43:42.467494  415212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:42.472242  415212 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:42.472302  415212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:42.518699  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:43:42.529010  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:43:42.539705  415212 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:43:42.545344  415212 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:43:42.545410  415212 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:43:42.595979  415212 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
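Each openssl `-hash` plus `ln -fs` pair above installs a CA into OpenSSL's hashed lookup directory: the link name is the certificate's subject hash with a ".0" suffix (b5213941.0 for minikubeCA). A sketch reproducing both steps, shelling out to openssl for the hash; the paths are placeholders taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certsDir under "<subject-hash>.0", the layout
// OpenSSL expects for CA lookup by hash.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA above
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // ln -fs semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}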
	I1101 09:43:42.606462  415212 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:43:42.611805  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:43:42.663825  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:43:42.709714  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:43:42.770308  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:43:42.819227  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:43:42.857860  415212 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
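The `-checkend 86400` runs above ask whether each certificate is still valid 24 hours from now. The equivalent check in Go with crypto/x509 (the path below is one of the certs from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}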
	I1101 09:43:42.911665  415212 kubeadm.go:401] StartCluster: {Name:embed-certs-214580 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-214580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:42.911779  415212 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:43:42.911854  415212 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:43:42.952363  415212 cri.go:89] found id: "92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1"
	I1101 09:43:42.952395  415212 cri.go:89] found id: "900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8"
	I1101 09:43:42.952401  415212 cri.go:89] found id: "e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc"
	I1101 09:43:42.952408  415212 cri.go:89] found id: "44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91"
	I1101 09:43:42.952412  415212 cri.go:89] found id: ""
	I1101 09:43:42.952464  415212 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:43:42.969631  415212 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:42Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:43:42.969717  415212 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:43:42.983548  415212 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:43:42.983658  415212 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:43:42.983758  415212 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:43:42.997583  415212 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:43:42.998501  415212 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-214580" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:42.999052  415212 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-214580" cluster setting kubeconfig missing "embed-certs-214580" context setting]
	I1101 09:43:42.999985  415212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
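The kubeconfig.go lines above notice that the "embed-certs-214580" cluster and context entries are missing and repair the kubeconfig in place. A sketch of that repair with client-go's clientcmd package; the server URL is inferred from the node IP and port in the log, and the exact fields minikube writes may differ:

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// ensureCluster adds a cluster+context entry if the name is absent, then
// persists the file, mirroring the "needs updating (will repair)" step.
func ensureCluster(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := api.NewCluster()
		cluster.Server = server
		cfg.Clusters[name] = cluster
		ctx := api.NewContext()
		ctx.Cluster = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := ensureCluster(
		"/home/jenkins/minikube-integration/21833-104443/kubeconfig",
		"embed-certs-214580",
		"https://192.168.94.2:8443",
	)
	if err != nil {
		log.Fatal(err)
	}
}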
	I1101 09:43:43.002025  415212 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:43:43.013495  415212 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1101 09:43:43.013531  415212 kubeadm.go:602] duration metric: took 29.849652ms to restartPrimaryControlPlane
	I1101 09:43:43.013542  415212 kubeadm.go:403] duration metric: took 101.889269ms to StartCluster
	I1101 09:43:43.013561  415212 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:43.013619  415212 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:43.015698  415212 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:43.016029  415212 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:43:43.016272  415212 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:43.016328  415212 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:43:43.016416  415212 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-214580"
	I1101 09:43:43.016433  415212 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-214580"
	W1101 09:43:43.016441  415212 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:43:43.016473  415212 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:43:43.016984  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.017151  415212 addons.go:70] Setting default-storageclass=true in profile "embed-certs-214580"
	I1101 09:43:43.017179  415212 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-214580"
	I1101 09:43:43.017517  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.017685  415212 addons.go:70] Setting dashboard=true in profile "embed-certs-214580"
	I1101 09:43:43.017704  415212 addons.go:239] Setting addon dashboard=true in "embed-certs-214580"
	W1101 09:43:43.017712  415212 addons.go:248] addon dashboard should already be in state true
	I1101 09:43:43.017744  415212 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:43:43.018423  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.018695  415212 out.go:179] * Verifying Kubernetes components...
	I1101 09:43:43.020059  415212 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:43.046025  415212 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:43:43.048430  415212 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:43:43.048634  415212 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:43.048682  415212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:43:43.048868  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:43.051226  415212 addons.go:239] Setting addon default-storageclass=true in "embed-certs-214580"
	W1101 09:43:43.051248  415212 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:43:43.051278  415212 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:43:43.051792  415212 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:43:42.456066  415823 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-927869 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:43:42.479834  415823 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1101 09:43:42.484221  415823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:43:42.496301  415823 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:43:42.496413  415823 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:43:42.496464  415823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:42.535292  415823 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:42.535315  415823 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:43:42.535368  415823 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:43:42.568658  415823 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:43:42.568680  415823 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:43:42.568688  415823 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1101 09:43:42.568801  415823 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-927869 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:43:42.568865  415823 ssh_runner.go:195] Run: crio config
	I1101 09:43:42.626225  415823 cni.go:84] Creating CNI manager for ""
	I1101 09:43:42.626249  415823 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:43:42.626272  415823 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1101 09:43:42.626304  415823 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-927869 NodeName:default-k8s-diff-port-927869 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:43:42.626482  415823 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-927869"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:43:42.626559  415823 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:43:42.639067  415823 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:43:42.639152  415823 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:43:42.649488  415823 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1101 09:43:42.665236  415823 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:43:42.682988  415823 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1101 09:43:42.700262  415823 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:43:42.704587  415823 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:43:42.717216  415823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:42.844156  415823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:42.868823  415823 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869 for IP: 192.168.76.2
	I1101 09:43:42.868848  415823 certs.go:195] generating shared ca certs ...
	I1101 09:43:42.868877  415823 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:42.869058  415823 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:43:42.869108  415823 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:43:42.869125  415823 certs.go:257] generating profile certs ...
	I1101 09:43:42.869245  415823 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/client.key
	I1101 09:43:42.869319  415823 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/apiserver.key.e8df713d
	I1101 09:43:42.869371  415823 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/proxy-client.key
	I1101 09:43:42.869516  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:43:42.869555  415823 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:43:42.869569  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:43:42.869598  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:43:42.869623  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:43:42.869654  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:43:42.869702  415823 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:43:42.870509  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:43:42.894454  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:43:42.920563  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:43:42.948379  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:43:42.977320  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1101 09:43:43.010941  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:43:43.047269  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:43:43.084220  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/default-k8s-diff-port-927869/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1101 09:43:43.118674  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:43:43.142840  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:43:43.166965  415823 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:43:43.187907  415823 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:43:43.204889  415823 ssh_runner.go:195] Run: openssl version
	I1101 09:43:43.213417  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:43:43.227714  415823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:43.232856  415823 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:43.232964  415823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:43:43.287727  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:43:43.300089  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:43:43.312268  415823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:43:43.317714  415823 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:43:43.317792  415823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:43:43.359788  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:43:43.371296  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:43:43.382180  415823 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:43:43.388219  415823 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:43:43.388286  415823 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:43:43.444324  415823 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
	I1101 09:43:43.457075  415823 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:43:43.463533  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:43:43.526185  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:43:43.597024  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:43:43.671106  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:43:43.766172  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:43:43.837327  415823 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1101 09:43:43.900489  415823 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-927869 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-927869 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:43:43.900851  415823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:43:43.900981  415823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:43:43.958788  415823 cri.go:89] found id: "b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be"
	I1101 09:43:43.958888  415823 cri.go:89] found id: "ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7"
	I1101 09:43:43.958894  415823 cri.go:89] found id: "a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07"
	I1101 09:43:43.958900  415823 cri.go:89] found id: "ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62"
	I1101 09:43:43.958904  415823 cri.go:89] found id: ""
	I1101 09:43:43.959040  415823 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:43:43.982105  415823 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:43:43Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:43:43.982185  415823 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:43:43.996172  415823 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:43:43.996201  415823 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:43:43.996254  415823 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:43:44.010324  415823 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:43:44.011625  415823 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-927869" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:44.012961  415823 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-927869" cluster setting kubeconfig missing "default-k8s-diff-port-927869" context setting]
	I1101 09:43:44.014941  415823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:44.017776  415823 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:43:44.040452  415823 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1101 09:43:44.040564  415823 kubeadm.go:602] duration metric: took 44.35552ms to restartPrimaryControlPlane
	I1101 09:43:44.040586  415823 kubeadm.go:403] duration metric: took 140.107691ms to StartCluster
	I1101 09:43:44.040646  415823 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:44.040744  415823 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:43:44.045279  415823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:43:44.045752  415823 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:43:44.046008  415823 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:43:44.046071  415823 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
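The toEnable map above is minikube's full addon matrix with only storage-provisioner, default-storageclass and dashboard switched on for this profile. The same state is visible from the CLI; a quick sketch using the profile name from the log:

    # Human-readable addon table for the profile:
    minikube -p default-k8s-diff-port-927869 addons list
    # Script-friendly variant:
    minikube -p default-k8s-diff-port-927869 addons list --output json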
	I1101 09:43:44.046158  415823 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-927869"
	I1101 09:43:44.046177  415823 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-927869"
	W1101 09:43:44.046185  415823 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:43:44.046211  415823 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:43:44.046687  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.046756  415823 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-927869"
	I1101 09:43:44.046773  415823 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-927869"
	W1101 09:43:44.046780  415823 addons.go:248] addon dashboard should already be in state true
	I1101 09:43:44.046808  415823 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:43:44.047268  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.047526  415823 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-927869"
	I1101 09:43:44.047557  415823 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-927869"
	I1101 09:43:44.047702  415823 out.go:179] * Verifying Kubernetes components...
	I1101 09:43:44.047889  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.049185  415823 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:43:44.087039  415823 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-927869"
	W1101 09:43:44.087065  415823 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:43:44.087095  415823 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:43:44.087598  415823 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:43:44.100010  415823 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:43:44.100096  415823 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:43:44.101322  415823 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:44.101343  415823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:43:44.101407  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:44.101605  415823 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
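The docker container inspect -f calls above use a Go template to extract the host port that Docker mapped to 22/tcp inside the node container; that is the 127.0.0.1 port the subsequent "new ssh client" lines dial. Equivalent one-liners against the same container:

    # Resolve the host-side SSH port via the same template minikube uses:
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      default-k8s-diff-port-927869
    # docker port prints the identical mapping in address:port form:
    docker port default-k8s-diff-port-927869 22/tcp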
	I1101 09:43:43.052097  415212 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:43:43.052856  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:43:43.052927  415212 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:43:43.053018  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:43.084265  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:43.086134  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:43.094070  415212 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:43.094147  415212 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:43:43.094238  415212 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:43:43.124062  415212 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:43:43.200033  415212 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:43.215782  415212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:43.221390  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:43:43.221419  415212 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:43:43.224610  415212 node_ready.go:35] waiting up to 6m0s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:43:43.240575  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:43:43.240604  415212 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:43:43.245391  415212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:43.258163  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:43:43.258194  415212 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:43:43.279416  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:43:43.279443  415212 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:43:43.304282  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:43:43.304308  415212 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:43:43.324970  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:43:43.325001  415212 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:43:43.342938  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:43:43.342967  415212 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:43:43.361686  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:43:43.361718  415212 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:43:43.379151  415212 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:43:43.379183  415212 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:43:43.396317  415212 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
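With all ten manifests staged under /etc/kubernetes/addons, a single kubectl apply creates the dashboard namespace, RBAC objects, Deployment and Service in one shot. To watch the result converge, the usual rollout check works (deployment and namespace names come from the stock dashboard addon manifests, matching the pods seen later in this report):

    # Wait for the dashboard Deployment created by the apply above:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl \
      -n kubernetes-dashboard rollout status \
      deployment/kubernetes-dashboard --timeout=120s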
	I1101 09:43:44.102961  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:43:44.102987  415823 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:43:44.103054  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:44.125100  415823 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:44.125128  415823 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:43:44.125193  415823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:43:44.138658  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:44.149228  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:44.165428  415823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:43:44.321348  415823 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:43:44.342100  415823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:43:44.343962  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:43:44.344046  415823 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:43:44.359269  415823 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:43:44.365623  415823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:43:44.374775  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:43:44.374804  415823 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:43:44.401472  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:43:44.401500  415823 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:43:44.436676  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:43:44.436702  415823 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:43:44.463074  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:43:44.463127  415823 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:43:44.492310  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:43:44.492353  415823 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:43:44.514280  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:43:44.514307  415823 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:43:44.534725  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:43:44.534777  415823 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:43:44.553894  415823 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:43:44.553942  415823 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:43:44.573149  415823 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
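Both processes then poll node readiness for up to six minutes (the node_ready.go lines). The same wait can be expressed declaratively with kubectl; a sketch using the in-VM kubeconfig path from the log:

    # Block until the node reports Ready, mirroring the 6m0s wait above:
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl wait \
      --for=condition=Ready node/default-k8s-diff-port-927869 --timeout=6m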
	I1101 09:43:45.265521  415212 node_ready.go:49] node "embed-certs-214580" is "Ready"
	I1101 09:43:45.265564  415212 node_ready.go:38] duration metric: took 2.04086826s for node "embed-certs-214580" to be "Ready" ...
	I1101 09:43:45.265581  415212 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:45.265684  415212 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:46.054367  415212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.838544632s)
	I1101 09:43:46.054434  415212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.809016396s)
	I1101 09:43:46.054793  415212 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.658434436s)
	I1101 09:43:46.054849  415212 api_server.go:72] duration metric: took 3.038641218s to wait for apiserver process to appear ...
	I1101 09:43:46.054889  415212 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:46.055001  415212 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:43:46.056593  415212 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-214580 addons enable metrics-server
	
	I1101 09:43:46.065878  415212 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:43:46.065908  415212 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
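The two [-] entries explain the 500: the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished yet, which is normal seconds after a control-plane restart; the same probe returns 200 about half a second later in this log. The per-check [+]/[-] breakdown comes from the apiserver's verbose health endpoints and can be requested directly:

    # Reproduce the verbose health breakdown through the apiserver:
    kubectl get --raw '/healthz?verbose'
    # readyz gives the same style of per-check report for readiness:
    kubectl get --raw '/readyz?verbose'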
	I1101 09:43:46.077849  415212 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:43:46.155800  415823 node_ready.go:49] node "default-k8s-diff-port-927869" is "Ready"
	I1101 09:43:46.155836  415823 node_ready.go:38] duration metric: took 1.796509732s for node "default-k8s-diff-port-927869" to be "Ready" ...
	I1101 09:43:46.155857  415823 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:43:46.155954  415823 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:43:46.930168  415823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.588021536s)
	I1101 09:43:46.930283  415823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.564629507s)
	I1101 09:43:46.930560  415823 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.357219376s)
	I1101 09:43:46.930894  415823 api_server.go:72] duration metric: took 2.885100182s to wait for apiserver process to appear ...
	I1101 09:43:46.930928  415823 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:43:46.930950  415823 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 09:43:46.932868  415823 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-927869 addons enable metrics-server
	
	I1101 09:43:46.937131  415823 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:43:46.937158  415823 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:43:46.944467  415823 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
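Both clusters print the metrics-server hint above because the dashboard's resource graphs need the metrics API. Enabling it and confirming it serves is two commands (profile name from the log):

    minikube -p default-k8s-diff-port-927869 addons enable metrics-server
    # Once the metrics API is up, top starts returning data:
    kubectl top nodes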
	I1101 09:43:46.079151  415212 addons.go:515] duration metric: took 3.062819878s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:43:46.554999  415212 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1101 09:43:46.567759  415212 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1101 09:43:46.569176  415212 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:46.569209  415212 api_server.go:131] duration metric: took 514.306569ms to wait for apiserver health ...
	I1101 09:43:46.569221  415212 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:46.576234  415212 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:46.576290  415212 system_pods.go:61] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:46.576305  415212 system_pods.go:61] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:46.576315  415212 system_pods.go:61] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:46.576325  415212 system_pods.go:61] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:46.576333  415212 system_pods.go:61] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:46.576341  415212 system_pods.go:61] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:46.576363  415212 system_pods.go:61] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:46.576372  415212 system_pods.go:61] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:46.576382  415212 system_pods.go:74] duration metric: took 7.152695ms to wait for pod list to return data ...
	I1101 09:43:46.576392  415212 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:46.580626  415212 default_sa.go:45] found service account: "default"
	I1101 09:43:46.580655  415212 default_sa.go:55] duration metric: took 4.255003ms for default service account to be created ...
	I1101 09:43:46.580667  415212 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:46.585141  415212 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:46.585181  415212 system_pods.go:89] "coredns-66bc5c9577-cmnj8" [7de64ad2-dad1-4aa9-aff7-af9733684465] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:46.585193  415212 system_pods.go:89] "etcd-embed-certs-214580" [3067d663-1fb6-40a5-a407-73de85ce4af8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:46.585203  415212 system_pods.go:89] "kindnet-v28lz" [d68725c8-8c77-4a60-801e-59385a165589] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:46.585217  415212 system_pods.go:89] "kube-apiserver-embed-certs-214580" [09218c1d-c2ad-4f9d-b2f7-16f2dc40a2c3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:46.585227  415212 system_pods.go:89] "kube-controller-manager-embed-certs-214580" [bf96ada1-b2b3-4aa2-8bf0-b6fc017c7516] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:46.585240  415212 system_pods.go:89] "kube-proxy-49j45" [234d7bd6-5336-4ec0-8d37-9e59105a6166] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:46.585246  415212 system_pods.go:89] "kube-scheduler-embed-certs-214580" [26199971-d49f-4722-89dc-fe5837bd4b52] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:46.585255  415212 system_pods.go:89] "storage-provisioner" [add6352a-7e5a-405a-96bb-cd63b7f4eb6a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:46.585264  415212 system_pods.go:126] duration metric: took 4.591983ms to wait for k8s-apps to be running ...
	I1101 09:43:46.585273  415212 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:46.585316  415212 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:46.613222  415212 system_svc.go:56] duration metric: took 27.933599ms WaitForService to wait for kubelet
	I1101 09:43:46.613257  415212 kubeadm.go:587] duration metric: took 3.597049999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:46.613327  415212 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:46.621371  415212 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:46.621417  415212 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:46.621435  415212 node_conditions.go:105] duration metric: took 8.094114ms to run NodePressure ...
	I1101 09:43:46.621451  415212 start.go:242] waiting for startup goroutines ...
	I1101 09:43:46.621460  415212 start.go:247] waiting for cluster config update ...
	I1101 09:43:46.621479  415212 start.go:256] writing updated cluster config ...
	I1101 09:43:46.621974  415212 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:46.627853  415212 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:46.640421  415212 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:43:48.648506  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
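pod_ready.go keeps polling until coredns either reports Ready or disappears; right after a restart the containers are Running but not yet Ready, so the W lines above repeat. The exact condition being watched can be read directly (pod name taken from the log; it changes per run):

    kubectl -n kube-system get pod coredns-66bc5c9577-cmnj8 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'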
	I1101 09:43:46.945699  415823 addons.go:515] duration metric: took 2.899627908s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:43:47.431437  415823 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1101 09:43:47.439241  415823 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1101 09:43:47.447435  415823 api_server.go:141] control plane version: v1.34.1
	I1101 09:43:47.447493  415823 api_server.go:131] duration metric: took 516.556456ms to wait for apiserver health ...
	I1101 09:43:47.447505  415823 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:43:47.460203  415823 system_pods.go:59] 8 kube-system pods found
	I1101 09:43:47.460331  415823 system_pods.go:61] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:47.460362  415823 system_pods.go:61] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:47.460383  415823 system_pods.go:61] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:47.460402  415823 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:47.460422  415823 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:47.460450  415823 system_pods.go:61] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:47.460469  415823 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:47.460488  415823 system_pods.go:61] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:47.460514  415823 system_pods.go:74] duration metric: took 13.000798ms to wait for pod list to return data ...
	I1101 09:43:47.460534  415823 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:43:47.469110  415823 default_sa.go:45] found service account: "default"
	I1101 09:43:47.469212  415823 default_sa.go:55] duration metric: took 8.656777ms for default service account to be created ...
	I1101 09:43:47.469244  415823 system_pods.go:116] waiting for k8s-apps to be running ...
	I1101 09:43:47.477777  415823 system_pods.go:86] 8 kube-system pods found
	I1101 09:43:47.478532  415823 system_pods.go:89] "coredns-66bc5c9577-mlk9t" [500c8e66-5d34-41b1-b23f-fe5858986803] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1101 09:43:47.478588  415823 system_pods.go:89] "etcd-default-k8s-diff-port-927869" [f032e32a-9c58-414b-86be-6f904a774687] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:43:47.478634  415823 system_pods.go:89] "kindnet-g9zdl" [e8a5182c-c2b0-4b2b-a8cf-531baef0a83d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:43:47.478654  415823 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-927869" [b7f0612a-2a91-4367-98c1-02485923f817] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:43:47.478668  415823 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-927869" [6216be20-a99e-48d7-b09d-eb34b8af7519] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:43:47.478731  415823 system_pods.go:89] "kube-proxy-dszvg" [17bd8a33-3ad1-4195-8ff9-dd78085ab995] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:43:47.478751  415823 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-927869" [a05f3add-a5bd-4e38-93dd-0e6632a1a715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:43:47.478762  415823 system_pods.go:89] "storage-provisioner" [0a2ed6da-a87e-4c60-b4b0-2e5644c99652] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1101 09:43:47.478773  415823 system_pods.go:126] duration metric: took 9.497735ms to wait for k8s-apps to be running ...
	I1101 09:43:47.478821  415823 system_svc.go:44] waiting for kubelet service to be running ....
	I1101 09:43:47.478922  415823 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:43:47.509551  415823 system_svc.go:56] duration metric: took 30.750037ms WaitForService to wait for kubelet
	I1101 09:43:47.509588  415823 kubeadm.go:587] duration metric: took 3.463795815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1101 09:43:47.509625  415823 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:43:47.517785  415823 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:43:47.517882  415823 node_conditions.go:123] node cpu capacity is 8
	I1101 09:43:47.517948  415823 node_conditions.go:105] duration metric: took 8.315373ms to run NodePressure ...
	I1101 09:43:47.518009  415823 start.go:242] waiting for startup goroutines ...
	I1101 09:43:47.518035  415823 start.go:247] waiting for cluster config update ...
	I1101 09:43:47.518075  415823 start.go:256] writing updated cluster config ...
	I1101 09:43:47.518797  415823 ssh_runner.go:195] Run: rm -f paused
	I1101 09:43:47.527117  415823 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:43:47.533342  415823 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	W1101 09:43:49.539378  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
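The sections that follow are the per-component excerpts that minikube collects from the affected node when a test fails. The same bundle can be regenerated on demand; a sketch against the profile whose logs appear below:

    # Re-collect the full log bundle for the profile:
    minikube -p old-k8s-version-106430 logs --file=logs.txt
    # Or tail just the CRI-O unit that produced the next section:
    minikube -p old-k8s-version-106430 ssh -- sudo journalctl -u crio -n 25 --no-pager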
	
	
	==> CRI-O <==
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.380986362Z" level=info msg="Created container dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=a44c62cb-1c22-4a75-9f13-ba85773beb23 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.381705832Z" level=info msg="Starting container: dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac" id=90b1a359-fb1b-412b-abf9-55bb3ef36585 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.383707014Z" level=info msg="Started container" PID=1756 containerID=dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper id=90b1a359-fb1b-412b-abf9-55bb3ef36585 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0f2f4bde7b184c5f1dec55e106c292f6d533d135f5f6af3619500092a33fc0a
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.430235582Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=27c7dbad-ea92-4afe-9ff4-4dfb65e9f07d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.43314999Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=97cc101d-9704-4051-ae69-5312aad6c7f5 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.436063306Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=019fc4c9-d450-4e99-9c11-132505634825 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.436177929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.443296456Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.443747783Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.477149981Z" level=info msg="Created container 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=019fc4c9-d450-4e99-9c11-132505634825 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.477781167Z" level=info msg="Starting container: 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d" id=49e6706b-1091-43fa-9fcf-66fca4d134c1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:22 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:22.479572609Z" level=info msg="Started container" PID=1767 containerID=2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper id=49e6706b-1091-43fa-9fcf-66fca4d134c1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0f2f4bde7b184c5f1dec55e106c292f6d533d135f5f6af3619500092a33fc0a
	Nov 01 09:43:23 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:23.436082737Z" level=info msg="Removing container: dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac" id=afd4225f-eb3c-4040-a48d-66297fd6b7f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:23 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:23.445519974Z" level=info msg="Removed container dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=afd4225f-eb3c-4040-a48d-66297fd6b7f0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.338882249Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4388d467-b988-462d-92d1-e966955015f4 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.341069659Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=20c2ec18-4d4e-436b-8242-74fd77f36f46 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.342883217Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=328687f1-a772-4d0c-b0a1-e7e291004b76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.34324312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.371415003Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.372172358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.415032252Z" level=info msg="Created container c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=328687f1-a772-4d0c-b0a1-e7e291004b76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.416868902Z" level=info msg="Starting container: c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276" id=8e15d6e3-7911-4df1-94f0-83d1b178c8d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.419564584Z" level=info msg="Started container" PID=1801 containerID=c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper id=8e15d6e3-7911-4df1-94f0-83d1b178c8d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e0f2f4bde7b184c5f1dec55e106c292f6d533d135f5f6af3619500092a33fc0a
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.488546752Z" level=info msg="Removing container: 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d" id=f1840e70-4a99-45c2-b7b9-d3bf6032c046 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:40 old-k8s-version-106430 crio[562]: time="2025-11-01T09:43:40.50288311Z" level=info msg="Removed container 2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl/dashboard-metrics-scraper" id=f1840e70-4a99-45c2-b7b9-d3bf6032c046 name=/runtime.v1.RuntimeService/RemoveContainer
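The repeating Created/Started/Removing cycle above is CRI-O carrying out kubelet's backoff restarts of dashboard-metrics-scraper; the container status table below shows the latest attempt as Exited with ATTEMPT 2. The previous container's logs usually name the crash cause:

    # Fetch the logs of the last crashed attempt of the scraper pod
    # (pod name from the log; --previous selects the prior container):
    kubectl -n kubernetes-dashboard logs \
      dashboard-metrics-scraper-5f989dc9cf-q9fgl --previous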
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	c56325247c9cf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   e0f2f4bde7b18       dashboard-metrics-scraper-5f989dc9cf-q9fgl       kubernetes-dashboard
	a64f928570c8e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   e0cadb03abc04       kubernetes-dashboard-8694d4445c-xc92m            kubernetes-dashboard
	d6d2f3a1c4ad0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   bd96391bbf931       storage-provisioner                              kube-system
	18c4c4f61352d       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   86e6b4273dfbd       busybox                                          default
	cf741a18f69cf       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           53 seconds ago      Running             coredns                     0                   783ba29cd48c2       coredns-5dd5756b68-xh2rf                         kube-system
	2cdc9fdfdcd81       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   bd96391bbf931       storage-provisioner                              kube-system
	4691c84ef3d07       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           53 seconds ago      Running             kube-proxy                  0                   11c136e5e6bf0       kube-proxy-zqs8f                                 kube-system
	fe67b21efed6f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   b4d9f74baaca7       kindnet-5v6hn                                    kube-system
	67383aa07ea5a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   465f0cd488238       kube-scheduler-old-k8s-version-106430            kube-system
	21c9e16bfcb6f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   cbf385e5afea3       kube-apiserver-old-k8s-version-106430            kube-system
	227f629919ddd       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   3d1cb6b0f215e       kube-controller-manager-old-k8s-version-106430   kube-system
	2879f0fdda15a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   f07e912331243       etcd-old-k8s-version-106430                      kube-system
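This table is crictl's view of every container on the node, including exited attempts. It can be reproduced on a live profile with:

    # -a includes exited containers such as the scraper's failed attempts:
    minikube -p old-k8s-version-106430 ssh -- sudo crictl ps -a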
	
	
	==> coredns [cf741a18f69cfdad6379c162ce83384fca951d4966d6fad0581fe96cf1e91908] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38017 - 10493 "HINFO IN 4377672268192665766.4869584298609322252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023873201s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
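coredns's kubernetes plugin holds off serving until it can sync with the API server, and the ready plugin keeps answering "Still waiting" until that sync completes; the WARNING above shows it eventually started with an unsynced API and began serving on :53 anyway. The readiness state is also visible from outside the pod (k8s-app=kube-dns is the stock coredns label and is assumed here):

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide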
	
	
	==> describe nodes <==
	Name:               old-k8s-version-106430
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-106430
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=old-k8s-version-106430
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:41:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-106430
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:43:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:41:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:43:30 +0000   Sat, 01 Nov 2025 09:42:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-106430
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                576f40f7-444f-4b9e-a2cc-82322f1cc662
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-xh2rf                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-106430                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-5v6hn                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-106430             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-106430    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-zqs8f                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-106430             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-q9fgl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-xc92m             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x9 over 2m9s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                 kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                 kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                 kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node old-k8s-version-106430 event: Registered Node old-k8s-version-106430 in Controller
	  Normal  NodeReady                97s                  kubelet          Node old-k8s-version-106430 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-106430 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-106430 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                  node-controller  Node old-k8s-version-106430 event: Registered Node old-k8s-version-106430 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [2879f0fdda15ae5930efa2d324aedc5144c2f63543dc974f06fa3e3168b46588] <==
	{"level":"info","ts":"2025-11-01T09:42:57.91674Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:42:57.916775Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-01T09:42:57.918833Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-01T09:42:57.918992Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:42:57.919051Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-01T09:42:57.919158Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-01T09:42:57.919208Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-01T09:42:59.109001Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-01T09:42:59.109073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-01T09:42:59.109098Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-01T09:42:59.109115Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.109123Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.10913Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.109137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-01T09:42:59.110206Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-106430 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-01T09:42:59.110218Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:42:59.110248Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-01T09:42:59.11044Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-01T09:42:59.110467Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-01T09:42:59.111585Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-01T09:42:59.111594Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-01T09:43:24.613345Z","caller":"traceutil/trace.go:171","msg":"trace[931700887] linearizableReadLoop","detail":"{readStateIndex:674; appliedIndex:673; }","duration":"119.721627ms","start":"2025-11-01T09:43:24.4936Z","end":"2025-11-01T09:43:24.613321Z","steps":["trace[931700887] 'read index received'  (duration: 27.854568ms)","trace[931700887] 'applied index is now lower than readState.Index'  (duration: 91.866211ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-01T09:43:24.613426Z","caller":"traceutil/trace.go:171","msg":"trace[1894147858] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"162.069439ms","start":"2025-11-01T09:43:24.451332Z","end":"2025-11-01T09:43:24.613402Z","steps":["trace[1894147858] 'process raft request'  (duration: 70.171508ms)","trace[1894147858] 'compare'  (duration: 91.720847ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:43:24.613503Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.90451ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-xh2rf\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2025-11-01T09:43:24.613556Z","caller":"traceutil/trace.go:171","msg":"trace[1153651909] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-xh2rf; range_end:; response_count:1; response_revision:646; }","duration":"119.978104ms","start":"2025-11-01T09:43:24.493564Z","end":"2025-11-01T09:43:24.613542Z","steps":["trace[1153651909] 'agreement among raft nodes before linearized reading'  (duration: 119.85112ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:43:54 up  1:26,  0 user,  load average: 4.97, 4.56, 2.96
	Linux old-k8s-version-106430 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe67b21efed6f525df5544423bd5e06bbcd9f87fc9fcb3d89a75da908e8b778a] <==
	I1101 09:43:00.984500       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:01.003315       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:43:01.003512       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:01.003529       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:01.003561       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:01Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:01.283671       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:01.283706       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:01.283718       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:01.303317       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:01.680447       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:01.680482       1 metrics.go:72] Registering metrics
	I1101 09:43:01.680545       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:11.283653       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:11.283760       1 main.go:301] handling current node
	I1101 09:43:21.285453       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:21.285507       1 main.go:301] handling current node
	I1101 09:43:31.283900       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:31.283975       1 main.go:301] handling current node
	I1101 09:43:41.283425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:41.283464       1 main.go:301] handling current node
	I1101 09:43:51.285487       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1101 09:43:51.285541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [21c9e16bfcb6f8965fbdbbf8b9f68b535b2252e3a9d58fe71811900f43d0178a] <==
	I1101 09:43:00.182258       1 aggregator.go:166] initial CRD sync complete...
	I1101 09:43:00.182294       1 autoregister_controller.go:141] Starting autoregister controller
	I1101 09:43:00.182317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:00.196084       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1101 09:43:00.244756       1 shared_informer.go:318] Caches are synced for configmaps
	I1101 09:43:00.245131       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1101 09:43:00.245201       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:43:00.245968       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1101 09:43:00.254981       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1101 09:43:00.255175       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1101 09:43:00.279978       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:00.290039       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:43:01.151092       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:01.267974       1 controller.go:624] quota admission added evaluator for: namespaces
	I1101 09:43:01.305837       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1101 09:43:01.331694       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:01.345590       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:01.354876       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1101 09:43:01.412843       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.71.114"}
	I1101 09:43:01.436710       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.242.166"}
	I1101 09:43:13.251650       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:43:13.251703       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:43:13.651527       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:43:13.651528       1 controller.go:624] quota admission added evaluator for: endpoints
	I1101 09:43:13.702159       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [227f629919dddfb2b5ef168af9cb9b28faa37ce01740e96b97f11cdff132e1a4] <==
	I1101 09:43:13.506320       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="291.465168ms"
	I1101 09:43:13.506457       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.504µs"
	I1101 09:43:13.705842       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1101 09:43:13.707670       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1101 09:43:13.717548       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-xc92m"
	I1101 09:43:13.717606       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	I1101 09:43:13.722574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.391188ms"
	I1101 09:43:13.725150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.713216ms"
	I1101 09:43:13.731472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="6.213281ms"
	I1101 09:43:13.731701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="122.084µs"
	I1101 09:43:13.737978       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.345453ms"
	I1101 09:43:13.738070       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="51.579µs"
	I1101 09:43:13.746599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="48.836µs"
	I1101 09:43:13.770114       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:43:13.775481       1 shared_informer.go:318] Caches are synced for garbage collector
	I1101 09:43:13.775516       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1101 09:43:18.443079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.336944ms"
	I1101 09:43:18.443373       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="110.947µs"
	I1101 09:43:22.441831       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="108.26µs"
	I1101 09:43:23.447147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.578µs"
	I1101 09:43:24.614980       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="132.898µs"
	I1101 09:43:34.197649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.366277ms"
	I1101 09:43:34.197884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="170.426µs"
	I1101 09:43:40.509811       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.347µs"
	I1101 09:43:44.046907       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.937µs"
	
	
	==> kube-proxy [4691c84ef3d07ba57728ac09a4552d6f8bf0fcc54705555278513250908efe00] <==
	I1101 09:43:00.784230       1 server_others.go:69] "Using iptables proxy"
	I1101 09:43:00.801202       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1101 09:43:00.826568       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:00.829987       1 server_others.go:152] "Using iptables Proxier"
	I1101 09:43:00.830032       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1101 09:43:00.830041       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1101 09:43:00.830085       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1101 09:43:00.830429       1 server.go:846] "Version info" version="v1.28.0"
	I1101 09:43:00.830452       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:00.831352       1 config.go:97] "Starting endpoint slice config controller"
	I1101 09:43:00.831382       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1101 09:43:00.831426       1 config.go:188] "Starting service config controller"
	I1101 09:43:00.831432       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1101 09:43:00.831457       1 config.go:315] "Starting node config controller"
	I1101 09:43:00.831461       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1101 09:43:00.932052       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1101 09:43:00.932263       1 shared_informer.go:318] Caches are synced for node config
	I1101 09:43:00.932352       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [67383aa07ea5a571b5780306e02b652d4100444e7d3375f13add5b076ff05a91] <==
	I1101 09:42:58.192019       1 serving.go:348] Generated self-signed cert in-memory
	I1101 09:43:00.230004       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1101 09:43:00.230031       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:00.233991       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1101 09:43:00.234028       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1101 09:43:00.234098       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:43:00.234122       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 09:43:00.234095       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:00.234160       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:43:00.235034       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1101 09:43:00.235175       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1101 09:43:00.334546       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1101 09:43:00.334627       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1101 09:43:00.334686       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.723768     722 topology_manager.go:215] "Topology Admit Handler" podUID="79c2ef77-baca-4182-8bd9-a64e4379615f" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-xc92m"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.724158     722 topology_manager.go:215] "Topology Admit Handler" podUID="e6e50343-6215-403c-859a-a0fca77e0e83" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858503     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/79c2ef77-baca-4182-8bd9-a64e4379615f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-xc92m\" (UID: \"79c2ef77-baca-4182-8bd9-a64e4379615f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xc92m"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858560     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cqtt\" (UniqueName: \"kubernetes.io/projected/79c2ef77-baca-4182-8bd9-a64e4379615f-kube-api-access-2cqtt\") pod \"kubernetes-dashboard-8694d4445c-xc92m\" (UID: \"79c2ef77-baca-4182-8bd9-a64e4379615f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xc92m"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858591     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/e6e50343-6215-403c-859a-a0fca77e0e83-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-q9fgl\" (UID: \"e6e50343-6215-403c-859a-a0fca77e0e83\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	Nov 01 09:43:13 old-k8s-version-106430 kubelet[722]: I1101 09:43:13.858689     722 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r84x\" (UniqueName: \"kubernetes.io/projected/e6e50343-6215-403c-859a-a0fca77e0e83-kube-api-access-7r84x\") pod \"dashboard-metrics-scraper-5f989dc9cf-q9fgl\" (UID: \"e6e50343-6215-403c-859a-a0fca77e0e83\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl"
	Nov 01 09:43:22 old-k8s-version-106430 kubelet[722]: I1101 09:43:22.429717     722 scope.go:117] "RemoveContainer" containerID="dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac"
	Nov 01 09:43:22 old-k8s-version-106430 kubelet[722]: I1101 09:43:22.441869     722 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-xc92m" podStartSLOduration=5.254453001 podCreationTimestamp="2025-11-01 09:43:13 +0000 UTC" firstStartedPulling="2025-11-01 09:43:14.049503488 +0000 UTC m=+16.808114186" lastFinishedPulling="2025-11-01 09:43:18.23685922 +0000 UTC m=+20.995469913" observedRunningTime="2025-11-01 09:43:18.430487445 +0000 UTC m=+21.189098157" watchObservedRunningTime="2025-11-01 09:43:22.441808728 +0000 UTC m=+25.200419440"
	Nov 01 09:43:23 old-k8s-version-106430 kubelet[722]: I1101 09:43:23.434498     722 scope.go:117] "RemoveContainer" containerID="dbd602e00beb15b3cb940fd593f9022794fe19b1209b6e7ddc4d154476aba1ac"
	Nov 01 09:43:23 old-k8s-version-106430 kubelet[722]: I1101 09:43:23.434657     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:23 old-k8s-version-106430 kubelet[722]: E1101 09:43:23.435047     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:24 old-k8s-version-106430 kubelet[722]: I1101 09:43:24.440172     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:24 old-k8s-version-106430 kubelet[722]: E1101 09:43:24.440585     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:25 old-k8s-version-106430 kubelet[722]: I1101 09:43:25.441977     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:25 old-k8s-version-106430 kubelet[722]: E1101 09:43:25.442223     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: I1101 09:43:40.337983     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: I1101 09:43:40.485654     722 scope.go:117] "RemoveContainer" containerID="2650e2facd7ebfed45bab2654801c9113c53b89250976d014266fe3ad88b908d"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: I1101 09:43:40.486142     722 scope.go:117] "RemoveContainer" containerID="c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	Nov 01 09:43:40 old-k8s-version-106430 kubelet[722]: E1101 09:43:40.487791     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:44 old-k8s-version-106430 kubelet[722]: I1101 09:43:44.027719     722 scope.go:117] "RemoveContainer" containerID="c56325247c9cf1854cfd85510e2c244d314627130c4a6a3158fbe4502d8da276"
	Nov 01 09:43:44 old-k8s-version-106430 kubelet[722]: E1101 09:43:44.028099     722 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q9fgl_kubernetes-dashboard(e6e50343-6215-403c-859a-a0fca77e0e83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q9fgl" podUID="e6e50343-6215-403c-859a-a0fca77e0e83"
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:43:48 old-k8s-version-106430 systemd[1]: kubelet.service: Consumed 1.635s CPU time.
	
	
	==> kubernetes-dashboard [a64f928570c8e93d7275efca3d34ba9452ed83d5461da05e9ccb47d00976bc06] <==
	2025/11/01 09:43:18 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:18 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:18 Using secret token for csrf signing
	2025/11/01 09:43:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:18 Successful initial request to the apiserver, version: v1.28.0
	2025/11/01 09:43:18 Generating JWE encryption key
	2025/11/01 09:43:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:18 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:18 Creating in-cluster Sidecar client
	2025/11/01 09:43:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:18 Serving insecurely on HTTP port: 9090
	2025/11/01 09:43:18 Starting overwatch
	2025/11/01 09:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2cdc9fdfdcd814051d3dd77cdf55c61477f757879bf74593ca0dd53e09115dbc] <==
	I1101 09:43:00.746410       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:00.748190       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [d6d2f3a1c4ad0645664556baf4dc3811e4149eaae2198bcfc7acb38d3e3375d9] <==
	I1101 09:43:01.430811       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1101 09:43:01.440327       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1101 09:43:01.440416       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1101 09:43:18.844664       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:43:18.844821       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-106430_b56e0a47-aa2b-4b3a-8183-1a69727715b6!
	I1101 09:43:18.844806       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d7c9887-d56d-4587-80ec-07ecbd12d0c2", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-106430_b56e0a47-aa2b-4b3a-8183-1a69727715b6 became leader
	I1101 09:43:18.946037       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-106430_b56e0a47-aa2b-4b3a-8183-1a69727715b6!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-106430 -n old-k8s-version-106430
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-106430 -n old-k8s-version-106430: exit status 2 (453.464194ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-106430 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (8.03s)
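Note: the GUEST_PAUSE failure above comes from minikube's pause path shelling out to `sudo runc list -f json` inside the node, which exits 1 with "open /run/runc: no such file or directory" on this CRI-O image; the same failure is captured verbatim in the no-preload stderr below. A minimal reproduction sketch, assuming the profile name from this run and a node reachable over `minikube ssh` (illustrative only, not part of the test suite):

	# Reproduce the failing call inside the node (expected to exit 1 here,
	# since runc's default state root /run/runc is absent under this CRI-O setup).
	out/minikube-linux-amd64 -p old-k8s-version-106430 ssh -- sudo runc list -f json
	# CRI-O's own view of the same containers still works through the CRI API.
	out/minikube-linux-amd64 -p old-k8s-version-106430 ssh -- sudo crictl ps -a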

x
+
TestStartStop/group/no-preload/serial/Pause (8.29s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-224845 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-224845 --alsologtostderr -v=1: exit status 80 (2.428275844s)

-- stdout --
	* Pausing node no-preload-224845 ... 
	
	

-- /stdout --
** stderr ** 
	I1101 09:44:00.346089  422675 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:00.346651  422675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.346665  422675 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:00.346670  422675 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.346929  422675 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:00.347226  422675 out.go:368] Setting JSON to false
	I1101 09:44:00.347268  422675 mustload.go:66] Loading cluster: no-preload-224845
	I1101 09:44:00.347709  422675 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.348160  422675 cli_runner.go:164] Run: docker container inspect no-preload-224845 --format={{.State.Status}}
	I1101 09:44:00.368154  422675 host.go:66] Checking if "no-preload-224845" exists ...
	I1101 09:44:00.368571  422675 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.434726  422675 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:81 SystemTime:2025-11-01 09:44:00.42186915 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.435566  422675 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-224845 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:44:00.437945  422675 out.go:179] * Pausing node no-preload-224845 ... 
	I1101 09:44:00.439120  422675 host.go:66] Checking if "no-preload-224845" exists ...
	I1101 09:44:00.439496  422675 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:00.439557  422675 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-224845
	I1101 09:44:00.459368  422675 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/no-preload-224845/id_rsa Username:docker}
	I1101 09:44:00.563758  422675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:00.579321  422675 pause.go:52] kubelet running: true
	I1101 09:44:00.579391  422675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:00.761097  422675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:00.761202  422675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:00.841836  422675 cri.go:89] found id: "7a65dc67a386fdf7a450e9872bb255dcbdafd029b0d8d23a977e370e3a0946ca"
	I1101 09:44:00.841872  422675 cri.go:89] found id: "b1d80070c7b62b4ce3881325918d5ec26625a7c59e81595495812d129d8df397"
	I1101 09:44:00.841876  422675 cri.go:89] found id: "46d64cccca50af2f52d98c41ed6ff7c7a6488ff68d47f87aab2bc38588c05601"
	I1101 09:44:00.841880  422675 cri.go:89] found id: "ab26c6524a0b1a9d55101400960c2a25417db0dca853ddac83d92c109aa50030"
	I1101 09:44:00.841883  422675 cri.go:89] found id: "12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2"
	I1101 09:44:00.841887  422675 cri.go:89] found id: "b86d55ba3c7fb8a98bf1426460718ba3297031e3b104e7f3f0f54dd2191f42f9"
	I1101 09:44:00.841891  422675 cri.go:89] found id: "4c03b3a94925c1523c1521759ebdf9c97a75c228e24c8af2033855424a4b0819"
	I1101 09:44:00.841896  422675 cri.go:89] found id: "9785973a66a04a46a343a8714f4f3ed4c9a422a4aa4c161fb46c9bc2f6bb8b09"
	I1101 09:44:00.841900  422675 cri.go:89] found id: "63567a1b6f89e42a1a75739ad77e7056590c8aca6daed13ac49ddcd23f14e41b"
	I1101 09:44:00.841943  422675 cri.go:89] found id: "a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59"
	I1101 09:44:00.841953  422675 cri.go:89] found id: "2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	I1101 09:44:00.841958  422675 cri.go:89] found id: ""
	I1101 09:44:00.842018  422675 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:00.857240  422675 retry.go:31] will retry after 236.244458ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:00Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:01.093678  422675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:01.108324  422675 pause.go:52] kubelet running: false
	I1101 09:44:01.108388  422675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:01.285655  422675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:01.285751  422675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:01.361886  422675 cri.go:89] found id: "7a65dc67a386fdf7a450e9872bb255dcbdafd029b0d8d23a977e370e3a0946ca"
	I1101 09:44:01.361973  422675 cri.go:89] found id: "b1d80070c7b62b4ce3881325918d5ec26625a7c59e81595495812d129d8df397"
	I1101 09:44:01.361986  422675 cri.go:89] found id: "46d64cccca50af2f52d98c41ed6ff7c7a6488ff68d47f87aab2bc38588c05601"
	I1101 09:44:01.361992  422675 cri.go:89] found id: "ab26c6524a0b1a9d55101400960c2a25417db0dca853ddac83d92c109aa50030"
	I1101 09:44:01.361999  422675 cri.go:89] found id: "12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2"
	I1101 09:44:01.362005  422675 cri.go:89] found id: "b86d55ba3c7fb8a98bf1426460718ba3297031e3b104e7f3f0f54dd2191f42f9"
	I1101 09:44:01.362012  422675 cri.go:89] found id: "4c03b3a94925c1523c1521759ebdf9c97a75c228e24c8af2033855424a4b0819"
	I1101 09:44:01.362017  422675 cri.go:89] found id: "9785973a66a04a46a343a8714f4f3ed4c9a422a4aa4c161fb46c9bc2f6bb8b09"
	I1101 09:44:01.362024  422675 cri.go:89] found id: "63567a1b6f89e42a1a75739ad77e7056590c8aca6daed13ac49ddcd23f14e41b"
	I1101 09:44:01.362032  422675 cri.go:89] found id: "a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59"
	I1101 09:44:01.362039  422675 cri.go:89] found id: "2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	I1101 09:44:01.362043  422675 cri.go:89] found id: ""
	I1101 09:44:01.362093  422675 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:01.376958  422675 retry.go:31] will retry after 276.751859ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:01Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:01.654526  422675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:01.671037  422675 pause.go:52] kubelet running: false
	I1101 09:44:01.671106  422675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:01.825415  422675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:01.825540  422675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:01.910375  422675 cri.go:89] found id: "7a65dc67a386fdf7a450e9872bb255dcbdafd029b0d8d23a977e370e3a0946ca"
	I1101 09:44:01.910408  422675 cri.go:89] found id: "b1d80070c7b62b4ce3881325918d5ec26625a7c59e81595495812d129d8df397"
	I1101 09:44:01.910415  422675 cri.go:89] found id: "46d64cccca50af2f52d98c41ed6ff7c7a6488ff68d47f87aab2bc38588c05601"
	I1101 09:44:01.910420  422675 cri.go:89] found id: "ab26c6524a0b1a9d55101400960c2a25417db0dca853ddac83d92c109aa50030"
	I1101 09:44:01.910425  422675 cri.go:89] found id: "12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2"
	I1101 09:44:01.910430  422675 cri.go:89] found id: "b86d55ba3c7fb8a98bf1426460718ba3297031e3b104e7f3f0f54dd2191f42f9"
	I1101 09:44:01.910434  422675 cri.go:89] found id: "4c03b3a94925c1523c1521759ebdf9c97a75c228e24c8af2033855424a4b0819"
	I1101 09:44:01.910438  422675 cri.go:89] found id: "9785973a66a04a46a343a8714f4f3ed4c9a422a4aa4c161fb46c9bc2f6bb8b09"
	I1101 09:44:01.910443  422675 cri.go:89] found id: "63567a1b6f89e42a1a75739ad77e7056590c8aca6daed13ac49ddcd23f14e41b"
	I1101 09:44:01.910464  422675 cri.go:89] found id: "a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59"
	I1101 09:44:01.910472  422675 cri.go:89] found id: "2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	I1101 09:44:01.910477  422675 cri.go:89] found id: ""
	I1101 09:44:01.910530  422675 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:01.924088  422675 retry.go:31] will retry after 476.046717ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:01Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:02.400566  422675 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:02.415003  422675 pause.go:52] kubelet running: false
	I1101 09:44:02.415071  422675 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:02.596307  422675 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:02.596408  422675 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:02.674270  422675 cri.go:89] found id: "7a65dc67a386fdf7a450e9872bb255dcbdafd029b0d8d23a977e370e3a0946ca"
	I1101 09:44:02.674292  422675 cri.go:89] found id: "b1d80070c7b62b4ce3881325918d5ec26625a7c59e81595495812d129d8df397"
	I1101 09:44:02.674297  422675 cri.go:89] found id: "46d64cccca50af2f52d98c41ed6ff7c7a6488ff68d47f87aab2bc38588c05601"
	I1101 09:44:02.674300  422675 cri.go:89] found id: "ab26c6524a0b1a9d55101400960c2a25417db0dca853ddac83d92c109aa50030"
	I1101 09:44:02.674303  422675 cri.go:89] found id: "12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2"
	I1101 09:44:02.674314  422675 cri.go:89] found id: "b86d55ba3c7fb8a98bf1426460718ba3297031e3b104e7f3f0f54dd2191f42f9"
	I1101 09:44:02.674317  422675 cri.go:89] found id: "4c03b3a94925c1523c1521759ebdf9c97a75c228e24c8af2033855424a4b0819"
	I1101 09:44:02.674319  422675 cri.go:89] found id: "9785973a66a04a46a343a8714f4f3ed4c9a422a4aa4c161fb46c9bc2f6bb8b09"
	I1101 09:44:02.674322  422675 cri.go:89] found id: "63567a1b6f89e42a1a75739ad77e7056590c8aca6daed13ac49ddcd23f14e41b"
	I1101 09:44:02.674338  422675 cri.go:89] found id: "a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59"
	I1101 09:44:02.674340  422675 cri.go:89] found id: "2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	I1101 09:44:02.674343  422675 cri.go:89] found id: ""
	I1101 09:44:02.674384  422675 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:02.693328  422675 out.go:203] 
	W1101 09:44:02.694619  422675 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:44:02.694661  422675 out.go:285] * 
	* 
	W1101 09:44:02.699144  422675 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:44:02.700579  422675 out.go:203] 

                                                
                                                
** /stderr **
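The root cause of the pause failure is visible in the stderr above: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory". The docker inspect below shows `/run` mounted as tmpfs in the kic container, so runc's state directory does not survive a container restart. A minimal Go sketch of the failing probe, using only the command line taken verbatim from the log (the wrapper function is illustrative, not minikube's code):

	// Sketch of the probe that fails above; not minikube's implementation.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func listRunning() ([]byte, error) {
		// With /run on tmpfs, runc's state dir /run/runc is gone after a
		// container restart, producing the "no such file or directory"
		// error captured in the stderr above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("list running: runc: sudo runc list -f json: %w\nstderr:\n%s", err, out)
		}
		return out, nil
	}

	func main() {
		if _, err := listRunning(); err != nil {
			fmt.Println("X Exiting due to GUEST_PAUSE: Pause:", err)
		}
	}

As the `retry.go:31` line at the top of the stderr shows, minikube retries this probe with a short backoff (476ms here) before surfacing GUEST_PAUSE.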
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-224845 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-224845
helpers_test.go:243: (dbg) docker inspect no-preload-224845:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8",
	        "Created": "2025-11-01T09:41:51.624954273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412590,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:43:19.705192231Z",
	            "FinishedAt": "2025-11-01T09:43:18.115686706Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/hosts",
	        "LogPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8-json.log",
	        "Name": "/no-preload-224845",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-224845:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-224845",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8",
	                "LowerDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-224845",
	                "Source": "/var/lib/docker/volumes/no-preload-224845/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-224845",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-224845",
	                "name.minikube.sigs.k8s.io": "no-preload-224845",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f4118b63dff2437fa58c274e4b3a920160812945a093da7cf07ae840e301f9d",
	            "SandboxKey": "/var/run/docker/netns/8f4118b63dff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-224845": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:45:c1:ed:03:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd9ea47f59972660007e0e7f49bc24269f3213f6370bb54b3108ffd5b79a05aa",
	                    "EndpointID": "119a7db29405e488f34018d729800a89e1d33982fab94506e0518fb8f5b6c07c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-224845",
	                        "968b2e1f8788"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
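The `NetworkSettings.Ports` block above is how the harness reaches the node: 22/tcp maps to 127.0.0.1:33113 for SSH and 8443/tcp to 33116 for the API server. A hedged Go sketch that extracts the SSH endpoint from `docker inspect` JSON (the struct models only the fields used here and is not minikube's type):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// containerInfo models just the port-mapping fields of `docker inspect`.
	type containerInfo struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-224845").Output()
		if err != nil {
			panic(err)
		}
		var info []containerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		// 22/tcp is the SSH endpoint; above it maps to 127.0.0.1:33113.
		ssh := info[0].NetworkSettings.Ports["22/tcp"]
		fmt.Printf("ssh endpoint: %s:%s\n", ssh[0].HostIp, ssh[0].HostPort)
	}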
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845: exit status 2 (373.271396ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-224845 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-224845 logs -n 25: (2.898841877s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:00.823522  422921 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:00.823684  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823696  422921 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:00.823702  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823906  422921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:00.824429  422921 out.go:368] Setting JSON to false
	I1101 09:44:00.825935  422921 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5179,"bootTime":1761985062,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:00.826062  422921 start.go:143] virtualization: kvm guest
	I1101 09:44:00.828080  422921 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:00.829516  422921 notify.go:221] Checking for updates...
	I1101 09:44:00.829545  422921 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:00.831103  422921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:00.832421  422921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:00.833671  422921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:00.835236  422921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:00.836312  422921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:00.838662  422921 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.838859  422921 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839032  422921 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839168  422921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:00.868651  422921 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:00.868776  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.932313  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.919582405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.932410  422921 docker.go:319] overlay module found
	I1101 09:44:00.934186  422921 out.go:179] * Using the docker driver based on user configuration
	I1101 09:44:00.935396  422921 start.go:309] selected driver: docker
	I1101 09:44:00.935426  422921 start.go:930] validating driver "docker" against <nil>
	I1101 09:44:00.935441  422921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:00.936076  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.998903  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.988574943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.999261  422921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:44:00.999309  422921 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:44:00.999988  422921 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:01.001892  422921 out.go:179] * Using Docker driver with root privileges
	I1101 09:44:01.003008  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:01.003093  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:01.003109  422921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:44:01.003194  422921 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:01.004455  422921 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:01.005836  422921 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:01.007040  422921 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:01.008185  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.008213  422921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:01.008239  422921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:01.008255  422921 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:01.008363  422921 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:01.008379  422921 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:01.008553  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:01.008588  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json: {Name:mk9b2e752fcdc3711c80d757637de7b71a85dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:01.031509  422921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:01.031532  422921 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:01.031549  422921 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:01.031586  422921 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:01.031708  422921 start.go:364] duration metric: took 99.393µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:01.031740  422921 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:01.031822  422921 start.go:125] createHost starting for "" (driver="docker")
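
The "Found local preload ... skipping download" lines in the start log above reflect a simple cache check against a fixed tarball path. A sketch of that check, assuming only the path layout shown in the log ($MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-<version>-cri-o-overlay-amd64.tar.lz4); the helper name is illustrative, not minikube's preload.go:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath mirrors the cache layout seen in the log above.
	func preloadPath(minikubeHome, k8sVersion string) string {
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball",
			fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion))
	}

	func main() {
		home := os.Getenv("MINIKUBE_HOME") // e.g. .../21833-104443/.minikube above
		p := preloadPath(home, "v1.34.1")
		if _, err := os.Stat(p); err == nil {
			fmt.Println("found local preload in cache, skipping download:", p)
		} else {
			fmt.Println("no cached preload, would download:", p)
		}
	}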
	
	
	==> CRI-O <==
	Nov 01 09:43:42 no-preload-224845 crio[563]: time="2025-11-01T09:43:42.602989283Z" level=info msg="Created container c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=493aa63f-c579-45de-be7f-9fccf774b49a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:42 no-preload-224845 crio[563]: time="2025-11-01T09:43:42.603785152Z" level=info msg="Starting container: c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f" id=1a52cb10-d357-47ba-ba1c-8537603e7d22 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:42 no-preload-224845 crio[563]: time="2025-11-01T09:43:42.606019474Z" level=info msg="Started container" PID=1681 containerID=c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper id=1a52cb10-d357-47ba-ba1c-8537603e7d22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08aa98263d887e1418e429b990979f3666323643e11d0d35edaeb010611870e2
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.571395048Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d15da47f-a482-43ca-9fb0-1664c4378634 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.577560257Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=07fbe20b-9609-43c5-ac8e-f8e50c1b77b2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.583170631Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=cc25be3d-66c6-4f61-8465-17a821da2e48 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.58349018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.595972704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.596669223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.652477583Z" level=info msg="Created container 2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=cc25be3d-66c6-4f61-8465-17a821da2e48 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.654200455Z" level=info msg="Starting container: 2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06" id=4b833659-2865-45f0-8fbc-54e9f3c2974d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.658287198Z" level=info msg="Started container" PID=1690 containerID=2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper id=4b833659-2865-45f0-8fbc-54e9f3c2974d name=/runtime.v1.RuntimeService/StartContainer sandboxID=08aa98263d887e1418e429b990979f3666323643e11d0d35edaeb010611870e2
	Nov 01 09:43:44 no-preload-224845 crio[563]: time="2025-11-01T09:43:44.588309439Z" level=info msg="Removing container: c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f" id=d14b788a-65d6-4db9-9c9d-301b88e6cabc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:44 no-preload-224845 crio[563]: time="2025-11-01T09:43:44.604928996Z" level=info msg="Removed container c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=d14b788a-65d6-4db9-9c9d-301b88e6cabc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.024187276Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=842571c8-18e9-476e-a25d-d43eee88c0e3 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.024873983Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=54dfb8a1-fc67-4465-95dd-706f5aa26a2b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.026903975Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7a7b8a06-bea0-47c1-a45d-9a24efbaac39 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.031599977Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6/kubernetes-dashboard" id=d1bf37d1-7af5-467b-9773-1131fb6743d0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.031748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.036651535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.036964021Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5c21b2a1dd2600cd50a6aa6905753c2c8eb0ecd1ae8748612da6e5ad2e0356c5/merged/etc/group: no such file or directory"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.037399022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.07655994Z" level=info msg="Created container a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6/kubernetes-dashboard" id=d1bf37d1-7af5-467b-9773-1131fb6743d0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.07746121Z" level=info msg="Starting container: a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59" id=3791408d-a0b3-40a1-b0d0-daa01ddbf16f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.079903826Z" level=info msg="Started container" PID=1742 containerID=a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6/kubernetes-dashboard id=3791408d-a0b3-40a1-b0d0-daa01ddbf16f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bde72670aa9b99a3907f512db88833559bedb915c624d31d72f53e71c9c0ea2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a2c0080e3c6d3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   9bde72670aa9b       kubernetes-dashboard-855c9754f9-fbzt6        kubernetes-dashboard
	2598363a2aa16       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago      Exited              dashboard-metrics-scraper   1                   08aa98263d887       dashboard-metrics-scraper-6ffb444bf9-xhszq   kubernetes-dashboard
	7a65dc67a386f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           30 seconds ago      Running             coredns                     0                   987f846e1f796       coredns-66bc5c9577-8qn69                     kube-system
	549082ebfc230       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           30 seconds ago      Running             busybox                     1                   5b5f82446b88f       busybox                                      default
	b1d80070c7b62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           33 seconds ago      Running             storage-provisioner         1                   20493fe3f4c1c       storage-provisioner                          kube-system
	46d64cccca50a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           34 seconds ago      Running             kindnet-cni                 0                   3d0dbb3c87f66       kindnet-24485                                kube-system
	ab26c6524a0b1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           34 seconds ago      Running             kube-proxy                  0                   1638ed5c88742       kube-proxy-f2f64                             kube-system
	12e0108d20913       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           34 seconds ago      Exited              storage-provisioner         0                   20493fe3f4c1c       storage-provisioner                          kube-system
	b86d55ba3c7fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           36 seconds ago      Running             etcd                        0                   9e5b54323fef3       etcd-no-preload-224845                       kube-system
	4c03b3a94925c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           36 seconds ago      Running             kube-scheduler              0                   e4d7bd9a9555f       kube-scheduler-no-preload-224845             kube-system
	9785973a66a04       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           36 seconds ago      Running             kube-controller-manager     0                   cd0099f8d4ee2       kube-controller-manager-no-preload-224845    kube-system
	63567a1b6f89e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           36 seconds ago      Running             kube-apiserver              0                   514db294786db       kube-apiserver-no-preload-224845             kube-system
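
The container status table above is crictl-style output; the IDs in its first column match those enumerated earlier by the `sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=..."` command in the pause stderr. A sketch of that per-namespace enumeration in Go (namespaces copied from the log; the helper name is illustrative, not minikube's cri.go):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// idsInNamespace lists container IDs whose pods live in the given
	// namespace, using the same crictl label filter as the log above.
	func idsInNamespace(ns string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+ns).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, ns := range []string{"kube-system", "kubernetes-dashboard", "istio-operator"} {
			ids, err := idsInNamespace(ns)
			if err != nil {
				fmt.Println(ns, "error:", err)
				continue
			}
			fmt.Printf("%s: %d containers\n", ns, len(ids))
		}
	}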
	
	
	==> coredns [7a65dc67a386fdf7a450e9872bb255dcbdafd029b0d8d23a977e370e3a0946ca] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49894 - 34673 "HINFO IN 6937972584737186280.444459367520885585. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024140718s
	
	
	==> describe nodes <==
	Name:               no-preload-224845
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-224845
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=no-preload-224845
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-224845
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-224845
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                3cec4f20-c471-4766-a85c-05fa10e538f8
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 coredns-66bc5c9577-8qn69                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 etcd-no-preload-224845                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-24485                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      90s
	  kube-system                 kube-apiserver-no-preload-224845              250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-no-preload-224845     200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-proxy-f2f64                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-no-preload-224845              100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xhszq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fbzt6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 33s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x8 over 102s)  kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x8 over 102s)  kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s (x8 over 102s)  kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           91s                  node-controller  Node no-preload-224845 event: Registered Node no-preload-224845 in Controller
	  Normal  NodeReady                77s                  kubelet          Node no-preload-224845 status is now: NodeReady
	  Normal  Starting                 37s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)    kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)    kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)    kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                  node-controller  Node no-preload-224845 event: Registered Node no-preload-224845 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [b86d55ba3c7fb8a98bf1426460718ba3297031e3b104e7f3f0f54dd2191f42f9] <==
	{"level":"warn","ts":"2025-11-01T09:43:28.331859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.338023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.344182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.360820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.367282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.373645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.380373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.392961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.399744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.409083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.416179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.422281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.428778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.434849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.441263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.447895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.454382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.460653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.466577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.473236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.480108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.491106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.498313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.504537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.550306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:44:04 up  1:26,  0 user,  load average: 13.22, 6.40, 3.58
	Linux no-preload-224845 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [46d64cccca50af2f52d98c41ed6ff7c7a6488ff68d47f87aab2bc38588c05601] <==
	I1101 09:43:29.991896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:29.992289       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:43:29.992454       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:29.992475       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:29.992506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:30.283252       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:30.283284       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:30.283314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:30.382821       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:30.783999       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:30.784029       1 metrics.go:72] Registering metrics
	I1101 09:43:30.784083       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:40.283535       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:43:40.283731       1 main.go:301] handling current node
	I1101 09:43:50.284184       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:43:50.284225       1 main.go:301] handling current node
	I1101 09:44:00.283123       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:44:00.283173       1 main.go:301] handling current node
	
	
	==> kube-apiserver [63567a1b6f89e42a1a75739ad77e7056590c8aca6daed13ac49ddcd23f14e41b] <==
	I1101 09:43:29.018199       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:43:29.020220       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:43:29.020289       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:43:29.020354       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:43:29.020416       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:43:29.020448       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:43:29.020470       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:43:29.020478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:29.020485       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:43:29.020606       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:43:29.020609       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:43:29.026536       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:43:29.026834       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:43:29.039184       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:29.302783       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:43:29.331634       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:43:29.351776       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:29.363371       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:29.371058       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:43:29.408289       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.59.131"}
	I1101 09:43:29.420176       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.81.201"}
	I1101 09:43:29.920022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:32.386735       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:43:32.736860       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:32.788669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9785973a66a04a46a343a8714f4f3ed4c9a422a4aa4c161fb46c9bc2f6bb8b09] <==
	I1101 09:43:32.384327       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:43:32.384347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:43:32.384363       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:43:32.384388       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:43:32.384418       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:43:32.384501       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:32.384564       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:43:32.384587       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:43:32.384663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:43:32.385630       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:43:32.385655       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:43:32.385692       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:43:32.388942       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:43:32.389029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:32.389067       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:43:32.389117       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:43:32.389156       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:43:32.389165       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:43:32.389176       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:43:32.390369       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:43:32.391076       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:32.393237       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:43:32.395535       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:43:32.410007       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:42.336110       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab26c6524a0b1a9d55101400960c2a25417db0dca853ddac83d92c109aa50030] <==
	I1101 09:43:29.866394       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:43:29.952419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:43:30.053111       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:43:30.053164       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:43:30.053268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:43:30.074857       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:30.074943       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:43:30.082024       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:43:30.082546       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:43:30.082580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:30.085903       1 config.go:200] "Starting service config controller"
	I1101 09:43:30.085943       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:43:30.086152       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:43:30.086176       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:43:30.086171       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:43:30.086196       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:43:30.086767       1 config.go:309] "Starting node config controller"
	I1101 09:43:30.086849       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:43:30.186127       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:43:30.186762       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:43:30.187974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:43:30.187992       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4c03b3a94925c1523c1521759ebdf9c97a75c228e24c8af2033855424a4b0819] <==
	I1101 09:43:27.315799       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:43:28.993304       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:43:28.993418       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:29.001863       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:43:29.001926       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:43:29.002002       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:29.002027       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:29.002085       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:43:29.002136       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:43:29.002321       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:43:29.002581       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:43:29.102338       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:29.102362       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:43:29.102362       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:43:30 no-preload-224845 kubelet[708]: E1101 09:43:30.251365     708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq podName:9e4e4413-d3b7-4a5f-b088-241e94f310a4 nodeName:}" failed. No retries permitted until 2025-11-01 09:43:31.251348572 +0000 UTC m=+4.844972456 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-knnxq" (UniqueName: "kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq") pod "busybox" (UID: "9e4e4413-d3b7-4a5f-b088-241e94f310a4") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:30 no-preload-224845 kubelet[708]: I1101 09:43:30.521455     708 scope.go:117] "RemoveContainer" containerID="12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2"
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.056160     708 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.056251     708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a237a12-aaf7-47b2-abd8-2af3fc8486e3-config-volume podName:6a237a12-aaf7-47b2-abd8-2af3fc8486e3 nodeName:}" failed. No retries permitted until 2025-11-01 09:43:33.056232089 +0000 UTC m=+6.649855973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6a237a12-aaf7-47b2-abd8-2af3fc8486e3-config-volume") pod "coredns-66bc5c9577-8qn69" (UID: "6a237a12-aaf7-47b2-abd8-2af3fc8486e3") : object "kube-system"/"coredns" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.257451     708 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.257499     708 projected.go:196] Error preparing data for projected volume kube-api-access-knnxq for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.257578     708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq podName:9e4e4413-d3b7-4a5f-b088-241e94f310a4 nodeName:}" failed. No retries permitted until 2025-11-01 09:43:33.257557058 +0000 UTC m=+6.851180965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-knnxq" (UniqueName: "kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq") pod "busybox" (UID: "9e4e4413-d3b7-4a5f-b088-241e94f310a4") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:37 no-preload-224845 kubelet[708]: I1101 09:43:37.509693     708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.516115     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99v5\" (UniqueName: \"kubernetes.io/projected/6cd9340e-f127-41fe-b84a-bf26b5741cf5-kube-api-access-m99v5\") pod \"dashboard-metrics-scraper-6ffb444bf9-xhszq\" (UID: \"6cd9340e-f127-41fe-b84a-bf26b5741cf5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.516173     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6cd9340e-f127-41fe-b84a-bf26b5741cf5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-xhszq\" (UID: \"6cd9340e-f127-41fe-b84a-bf26b5741cf5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.616523     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5cc5ae62-ff49-4cb6-8b46-6c99687d75e6-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fbzt6\" (UID: \"5cc5ae62-ff49-4cb6-8b46-6c99687d75e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.616574     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qpj\" (UniqueName: \"kubernetes.io/projected/5cc5ae62-ff49-4cb6-8b46-6c99687d75e6-kube-api-access-44qpj\") pod \"kubernetes-dashboard-855c9754f9-fbzt6\" (UID: \"5cc5ae62-ff49-4cb6-8b46-6c99687d75e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6"
	Nov 01 09:43:43 no-preload-224845 kubelet[708]: I1101 09:43:43.570297     708 scope.go:117] "RemoveContainer" containerID="c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f"
	Nov 01 09:43:44 no-preload-224845 kubelet[708]: I1101 09:43:44.581599     708 scope.go:117] "RemoveContainer" containerID="2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	Nov 01 09:43:44 no-preload-224845 kubelet[708]: I1101 09:43:44.582578     708 scope.go:117] "RemoveContainer" containerID="c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f"
	Nov 01 09:43:44 no-preload-224845 kubelet[708]: E1101 09:43:44.584950     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xhszq_kubernetes-dashboard(6cd9340e-f127-41fe-b84a-bf26b5741cf5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq" podUID="6cd9340e-f127-41fe-b84a-bf26b5741cf5"
	Nov 01 09:43:45 no-preload-224845 kubelet[708]: I1101 09:43:45.588529     708 scope.go:117] "RemoveContainer" containerID="2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	Nov 01 09:43:45 no-preload-224845 kubelet[708]: E1101 09:43:45.589435     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xhszq_kubernetes-dashboard(6cd9340e-f127-41fe-b84a-bf26b5741cf5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq" podUID="6cd9340e-f127-41fe-b84a-bf26b5741cf5"
	Nov 01 09:43:48 no-preload-224845 kubelet[708]: I1101 09:43:48.613464     708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6" podStartSLOduration=8.351460723 podStartE2EDuration="16.61344143s" podCreationTimestamp="2025-11-01 09:43:32 +0000 UTC" firstStartedPulling="2025-11-01 09:43:39.764114269 +0000 UTC m=+13.357738159" lastFinishedPulling="2025-11-01 09:43:48.026094966 +0000 UTC m=+21.619718866" observedRunningTime="2025-11-01 09:43:48.613410185 +0000 UTC m=+22.207034095" watchObservedRunningTime="2025-11-01 09:43:48.61344143 +0000 UTC m=+22.207065339"
	Nov 01 09:43:49 no-preload-224845 kubelet[708]: I1101 09:43:49.734305     708 scope.go:117] "RemoveContainer" containerID="2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	Nov 01 09:43:49 no-preload-224845 kubelet[708]: E1101 09:43:49.734554     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xhszq_kubernetes-dashboard(6cd9340e-f127-41fe-b84a-bf26b5741cf5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq" podUID="6cd9340e-f127-41fe-b84a-bf26b5741cf5"
	Nov 01 09:44:00 no-preload-224845 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:00 no-preload-224845 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:00 no-preload-224845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:44:00 no-preload-224845 systemd[1]: kubelet.service: Consumed 1.449s CPU time.
	
	
	==> kubernetes-dashboard [a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59] <==
	2025/11/01 09:43:48 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:48 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:48 Using secret token for csrf signing
	2025/11/01 09:43:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:43:48 Generating JWE encryption key
	2025/11/01 09:43:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:48 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:48 Creating in-cluster Sidecar client
	2025/11/01 09:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:48 Serving insecurely on HTTP port: 9090
	2025/11/01 09:43:48 Starting overwatch
	
	
	==> storage-provisioner [12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2] <==
	I1101 09:43:29.836735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:29.838408       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b1d80070c7b62b4ce3881325918d5ec26625a7c59e81595495812d129d8df397] <==
	W1101 09:43:44.969542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:47.992361       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:47.997510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:47.997679       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:43:47.997764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4838c3ac-e532-4877-9b16-b80d4afab202", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-224845_4bc14e65-0a5d-44a8-a864-b525c920b0d3 became leader
	I1101 09:43:47.997806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-224845_4bc14e65-0a5d-44a8-a864-b525c920b0d3!
	W1101 09:43:48.003363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:48.007432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:48.098422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-224845_4bc14e65-0a5d-44a8-a864-b525c920b0d3!
	W1101 09:43:50.010211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:50.014706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:52.018298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:52.029058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:54.033089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:54.038659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:56.043193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:56.061126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:58.064518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:58.070786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:00.074783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:00.085059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:02.089185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:02.094111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:04.098076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:04.108075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-224845 -n no-preload-224845
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-224845 -n no-preload-224845: exit status 2 (361.55628ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-224845 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-224845
helpers_test.go:243: (dbg) docker inspect no-preload-224845:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8",
	        "Created": "2025-11-01T09:41:51.624954273Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 412590,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:43:19.705192231Z",
	            "FinishedAt": "2025-11-01T09:43:18.115686706Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/hostname",
	        "HostsPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/hosts",
	        "LogPath": "/var/lib/docker/containers/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8/968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8-json.log",
	        "Name": "/no-preload-224845",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-224845:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-224845",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "968b2e1f8788617566ba96409b0081b04130c885914d8f0742a4688cee09b1d8",
	                "LowerDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a541b9d75eb89e9504b7f06d766651f6851f9575d1e05b81374655614cb87111/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-224845",
	                "Source": "/var/lib/docker/volumes/no-preload-224845/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-224845",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-224845",
	                "name.minikube.sigs.k8s.io": "no-preload-224845",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8f4118b63dff2437fa58c274e4b3a920160812945a093da7cf07ae840e301f9d",
	            "SandboxKey": "/var/run/docker/netns/8f4118b63dff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-224845": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6e:45:c1:ed:03:dd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "fd9ea47f59972660007e0e7f49bc24269f3213f6370bb54b3108ffd5b79a05aa",
	                    "EndpointID": "119a7db29405e488f34018d729800a89e1d33982fab94506e0518fb8f5b6c07c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-224845",
	                        "968b2e1f8788"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845: exit status 2 (387.467959ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-224845 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-224845 logs -n 25: (1.265522545s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-309397                                                                                                                                                                                                               │ disable-driver-mounts-309397 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-106430 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:00.823522  422921 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:00.823684  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823696  422921 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:00.823702  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823906  422921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:00.824429  422921 out.go:368] Setting JSON to false
	I1101 09:44:00.825935  422921 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5179,"bootTime":1761985062,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:00.826062  422921 start.go:143] virtualization: kvm guest
	I1101 09:44:00.828080  422921 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:00.829516  422921 notify.go:221] Checking for updates...
	I1101 09:44:00.829545  422921 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:00.831103  422921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:00.832421  422921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:00.833671  422921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:00.835236  422921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:00.836312  422921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:00.838662  422921 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.838859  422921 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839032  422921 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839168  422921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:00.868651  422921 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:00.868776  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.932313  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.919582405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.932410  422921 docker.go:319] overlay module found
	I1101 09:44:00.934186  422921 out.go:179] * Using the docker driver based on user configuration
	I1101 09:44:00.935396  422921 start.go:309] selected driver: docker
	I1101 09:44:00.935426  422921 start.go:930] validating driver "docker" against <nil>
	I1101 09:44:00.935441  422921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:00.936076  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.998903  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.988574943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.999261  422921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:44:00.999309  422921 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:44:00.999988  422921 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:01.001892  422921 out.go:179] * Using Docker driver with root privileges
	I1101 09:44:01.003008  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:01.003093  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:01.003109  422921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:44:01.003194  422921 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:01.004455  422921 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:01.005836  422921 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:01.007040  422921 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:01.008185  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.008213  422921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:01.008239  422921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:01.008255  422921 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:01.008363  422921 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:01.008379  422921 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:01.008553  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:01.008588  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json: {Name:mk9b2e752fcdc3711c80d757637de7b71a85dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:01.031509  422921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:01.031532  422921 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:01.031549  422921 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:01.031586  422921 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:01.031708  422921 start.go:364] duration metric: took 99.393µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:01.031740  422921 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:01.031822  422921 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:44:00.646338  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.647289  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.540153  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:04.540648  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:01.033898  422921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:44:01.034155  422921 start.go:159] libmachine.API.Create for "newest-cni-722387" (driver="docker")
	I1101 09:44:01.034187  422921 client.go:173] LocalClient.Create starting
	I1101 09:44:01.034307  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem
	I1101 09:44:01.034359  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034377  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034445  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem
	I1101 09:44:01.034476  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034491  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034944  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:44:01.054283  422921 cli_runner.go:211] docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:44:01.054353  422921 network_create.go:284] running [docker network inspect newest-cni-722387] to gather additional debugging logs...
	I1101 09:44:01.054368  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387
	W1101 09:44:01.073549  422921 cli_runner.go:211] docker network inspect newest-cni-722387 returned with exit code 1
	I1101 09:44:01.073579  422921 network_create.go:287] error running [docker network inspect newest-cni-722387]: docker network inspect newest-cni-722387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-722387 not found
	I1101 09:44:01.073594  422921 network_create.go:289] output of [docker network inspect newest-cni-722387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-722387 not found
	
	** /stderr **
	I1101 09:44:01.073692  422921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:01.093393  422921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d29bf8504a2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:cd:69:fb:c0:b7} reservation:<nil>}
	I1101 09:44:01.094218  422921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a4cb229b081d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:6d:0e:f5:7f:54} reservation:<nil>}
	I1101 09:44:01.095202  422921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-859d00dbc8b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:da:ec:9f:a9:b4} reservation:<nil>}
	I1101 09:44:01.095784  422921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5df57938ba0e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:1b:ab:95:75:01} reservation:<nil>}
	I1101 09:44:01.096312  422921 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-fd9ea47f5997 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:e6:71:2c:14:ef} reservation:<nil>}
	I1101 09:44:01.096837  422921 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ef396acdcfef IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:66:2e:03:68:3f:bb} reservation:<nil>}
	I1101 09:44:01.097629  422921 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1da70}
	I1101 09:44:01.097655  422921 network_create.go:124] attempt to create docker network newest-cni-722387 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1101 09:44:01.097704  422921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-722387 newest-cni-722387
	I1101 09:44:01.177766  422921 network_create.go:108] docker network newest-cni-722387 192.168.103.0/24 created
	I1101 09:44:01.177827  422921 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-722387" container
	I1101 09:44:01.177901  422921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:44:01.199194  422921 cli_runner.go:164] Run: docker volume create newest-cni-722387 --label name.minikube.sigs.k8s.io=newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:44:01.221436  422921 oci.go:103] Successfully created a docker volume newest-cni-722387
	I1101 09:44:01.221600  422921 cli_runner.go:164] Run: docker run --rm --name newest-cni-722387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --entrypoint /usr/bin/test -v newest-cni-722387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:44:01.677464  422921 oci.go:107] Successfully prepared a docker volume newest-cni-722387
	I1101 09:44:01.677514  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.677544  422921 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:44:01.677623  422921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
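	
	The network-selection steps above (skip each 192.168.x.0/24 subnet that already backs a bridge network, then create a labeled bridge on the first free one) can be replayed by hand with the docker CLI. A minimal sketch using the profile name and subnet from this run; adjust both to your environment:
	
	  # Inspect the profile network (fails with "not found" before creation,
	  # exactly as in the log above).
	  docker network inspect newest-cni-722387
	
	  # List subnets already claimed by bridge networks, mirroring the
	  # "skipping subnet ... that is taken" scan.
	  docker network ls --filter driver=bridge -q | xargs docker network inspect \
	    --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	
	  # Create the network the way minikube does, labels included.
	  docker network create --driver=bridge \
	    --subnet=192.168.103.0/24 --gateway=192.168.103.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=newest-cni-722387 \
	    newest-cni-722387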
	
	
	==> CRI-O <==
	Nov 01 09:43:42 no-preload-224845 crio[563]: time="2025-11-01T09:43:42.602989283Z" level=info msg="Created container c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=493aa63f-c579-45de-be7f-9fccf774b49a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:42 no-preload-224845 crio[563]: time="2025-11-01T09:43:42.603785152Z" level=info msg="Starting container: c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f" id=1a52cb10-d357-47ba-ba1c-8537603e7d22 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:42 no-preload-224845 crio[563]: time="2025-11-01T09:43:42.606019474Z" level=info msg="Started container" PID=1681 containerID=c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper id=1a52cb10-d357-47ba-ba1c-8537603e7d22 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08aa98263d887e1418e429b990979f3666323643e11d0d35edaeb010611870e2
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.571395048Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d15da47f-a482-43ca-9fb0-1664c4378634 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.577560257Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=07fbe20b-9609-43c5-ac8e-f8e50c1b77b2 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.583170631Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=cc25be3d-66c6-4f61-8465-17a821da2e48 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.58349018Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.595972704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.596669223Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.652477583Z" level=info msg="Created container 2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=cc25be3d-66c6-4f61-8465-17a821da2e48 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.654200455Z" level=info msg="Starting container: 2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06" id=4b833659-2865-45f0-8fbc-54e9f3c2974d name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:43 no-preload-224845 crio[563]: time="2025-11-01T09:43:43.658287198Z" level=info msg="Started container" PID=1690 containerID=2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper id=4b833659-2865-45f0-8fbc-54e9f3c2974d name=/runtime.v1.RuntimeService/StartContainer sandboxID=08aa98263d887e1418e429b990979f3666323643e11d0d35edaeb010611870e2
	Nov 01 09:43:44 no-preload-224845 crio[563]: time="2025-11-01T09:43:44.588309439Z" level=info msg="Removing container: c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f" id=d14b788a-65d6-4db9-9c9d-301b88e6cabc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:44 no-preload-224845 crio[563]: time="2025-11-01T09:43:44.604928996Z" level=info msg="Removed container c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq/dashboard-metrics-scraper" id=d14b788a-65d6-4db9-9c9d-301b88e6cabc name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.024187276Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=842571c8-18e9-476e-a25d-d43eee88c0e3 name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.024873983Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=54dfb8a1-fc67-4465-95dd-706f5aa26a2b name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.026903975Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=7a7b8a06-bea0-47c1-a45d-9a24efbaac39 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.031599977Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6/kubernetes-dashboard" id=d1bf37d1-7af5-467b-9773-1131fb6743d0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.031748Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.036651535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.036964021Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5c21b2a1dd2600cd50a6aa6905753c2c8eb0ecd1ae8748612da6e5ad2e0356c5/merged/etc/group: no such file or directory"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.037399022Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.07655994Z" level=info msg="Created container a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6/kubernetes-dashboard" id=d1bf37d1-7af5-467b-9773-1131fb6743d0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.07746121Z" level=info msg="Starting container: a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59" id=3791408d-a0b3-40a1-b0d0-daa01ddbf16f name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:48 no-preload-224845 crio[563]: time="2025-11-01T09:43:48.079903826Z" level=info msg="Started container" PID=1742 containerID=a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6/kubernetes-dashboard id=3791408d-a0b3-40a1-b0d0-daa01ddbf16f name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bde72670aa9b99a3907f512db88833559bedb915c624d31d72f53e71c9c0ea2
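	
	The entries above are the node's crio service journal. On a docker-driver profile they can be tailed live; a minimal sketch, assuming the systemd unit is named crio as in minikube's kicbase image:
	
	  # Follow CRI-O logs inside the node for this profile.
	  minikube ssh -p no-preload-224845 -- sudo journalctl -u crio -f --no-pager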
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a2c0080e3c6d3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   19 seconds ago      Running             kubernetes-dashboard        0                   9bde72670aa9b       kubernetes-dashboard-855c9754f9-fbzt6        kubernetes-dashboard
	2598363a2aa16       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   1                   08aa98263d887       dashboard-metrics-scraper-6ffb444bf9-xhszq   kubernetes-dashboard
	7a65dc67a386f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           34 seconds ago      Running             coredns                     0                   987f846e1f796       coredns-66bc5c9577-8qn69                     kube-system
	549082ebfc230       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           34 seconds ago      Running             busybox                     1                   5b5f82446b88f       busybox                                      default
	b1d80070c7b62       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           37 seconds ago      Running             storage-provisioner         1                   20493fe3f4c1c       storage-provisioner                          kube-system
	46d64cccca50a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           37 seconds ago      Running             kindnet-cni                 0                   3d0dbb3c87f66       kindnet-24485                                kube-system
	ab26c6524a0b1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           37 seconds ago      Running             kube-proxy                  0                   1638ed5c88742       kube-proxy-f2f64                             kube-system
	12e0108d20913       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           37 seconds ago      Exited              storage-provisioner         0                   20493fe3f4c1c       storage-provisioner                          kube-system
	b86d55ba3c7fb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           40 seconds ago      Running             etcd                        0                   9e5b54323fef3       etcd-no-preload-224845                       kube-system
	4c03b3a94925c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           40 seconds ago      Running             kube-scheduler              0                   e4d7bd9a9555f       kube-scheduler-no-preload-224845             kube-system
	9785973a66a04       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           40 seconds ago      Running             kube-controller-manager     0                   cd0099f8d4ee2       kube-controller-manager-no-preload-224845    kube-system
	63567a1b6f89e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           40 seconds ago      Running             kube-apiserver              0                   514db294786db       kube-apiserver-no-preload-224845             kube-system
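	
	The table above is CRI-level container state, so it can be queried directly on the node with crictl. A small sketch against this profile:
	
	  # All containers, running and exited, as in the table above.
	  minikube ssh -p no-preload-224845 -- sudo crictl ps -a
	
	  # Filter to one workload, e.g. the dashboard metrics scraper.
	  minikube ssh -p no-preload-224845 -- sudo crictl ps -a --name dashboard-metrics-scraper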
	
	
	==> coredns [7a65dc67a386fdf7a450e9872bb255dcbdafd029b0d8d23a977e370e3a0946ca] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49894 - 34673 "HINFO IN 6937972584737186280.444459367520885585. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024140718s
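	
	The single NXDOMAIN line above is expected: it is CoreDNS's loop-detection probe (the loop plugin queries itself for a random HINFO name at startup, and NXDOMAIN is the healthy answer). For an end-to-end DNS check against this profile, a minimal sketch (pod name and image are illustrative):
	
	  kubectl --context no-preload-224845 run dns-probe --rm -it \
	    --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default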
	
	
	==> describe nodes <==
	Name:               no-preload-224845
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-224845
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=no-preload-224845
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_28_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-224845
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:43:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:43:59 +0000   Sat, 01 Nov 2025 09:43:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-224845
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                3cec4f20-c471-4766-a85c-05fa10e538f8
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 coredns-66bc5c9577-8qn69                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     94s
	  kube-system                 etcd-no-preload-224845                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         100s
	  kube-system                 kindnet-24485                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      94s
	  kube-system                 kube-apiserver-no-preload-224845              250m (3%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-controller-manager-no-preload-224845     200m (2%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 kube-proxy-f2f64                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-scheduler-no-preload-224845              100m (1%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xhszq    0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fbzt6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 93s                  kube-proxy       
	  Normal  Starting                 37s                  kube-proxy       
	  Normal  Starting                 106s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  106s (x8 over 106s)  kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    106s (x8 over 106s)  kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     106s (x8 over 106s)  kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    100s                 kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  100s                 kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     100s                 kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           95s                  node-controller  Node no-preload-224845 event: Registered Node no-preload-224845 in Controller
	  Normal  NodeReady                81s                  kubelet          Node no-preload-224845 status is now: NodeReady
	  Normal  Starting                 41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)    kubelet          Node no-preload-224845 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)    kubelet          Node no-preload-224845 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)    kubelet          Node no-preload-224845 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                  node-controller  Node no-preload-224845 event: Registered Node no-preload-224845 in Controller
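	
	The node summary above is kubectl describe output and can be regenerated at any time; minikube names the kubeconfig context after the profile:
	
	  kubectl --context no-preload-224845 describe node no-preload-224845
	  # Condensed one-line view:
	  kubectl --context no-preload-224845 get node no-preload-224845 -o wide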
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
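	
	The repeated "martian source" lines are the kernel flagging packets whose source address is implausible for the interface they arrived on; with nested bridge networking like this they are common and usually benign. They only appear when martian logging is enabled on the host:
	
	  # 1 means martians are logged; 0 silences them (host-wide, use with care).
	  sysctl net.ipv4.conf.all.log_martians
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0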
	
	
	==> etcd [b86d55ba3c7fb8a98bf1426460718ba3297031e3b104e7f3f0f54dd2191f42f9] <==
	{"level":"warn","ts":"2025-11-01T09:43:28.338023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.344182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.360820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.367282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.373645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.380373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.392961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.399744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.409083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.416179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.422281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.428778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.434849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.441263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.447895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.454382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.460653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.466577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.473236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.480108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.491106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.498313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.504537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:28.550306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37074","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:44:06.221598Z","caller":"traceutil/trace.go:172","msg":"trace[1198817213] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"107.146548ms","start":"2025-11-01T09:44:06.114436Z","end":"2025-11-01T09:44:06.221582Z","steps":["trace[1198817213] 'process raft request'  (duration: 107.00874ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:44:07 up  1:26,  0 user,  load average: 13.22, 6.40, 3.58
	Linux no-preload-224845 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [46d64cccca50af2f52d98c41ed6ff7c7a6488ff68d47f87aab2bc38588c05601] <==
	I1101 09:43:29.991896       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:29.992289       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1101 09:43:29.992454       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:29.992475       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:29.992506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:30.283252       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:30.283284       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:30.283314       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:30.382821       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:30.783999       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:30.784029       1 metrics.go:72] Registering metrics
	I1101 09:43:30.784083       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:40.283535       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:43:40.283731       1 main.go:301] handling current node
	I1101 09:43:50.284184       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:43:50.284225       1 main.go:301] handling current node
	I1101 09:44:00.283123       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1101 09:44:00.283173       1 main.go:301] handling current node
	
	
	==> kube-apiserver [63567a1b6f89e42a1a75739ad77e7056590c8aca6daed13ac49ddcd23f14e41b] <==
	I1101 09:43:29.018199       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:43:29.020220       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:43:29.020289       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:43:29.020354       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:43:29.020416       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:43:29.020448       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:43:29.020470       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:43:29.020478       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:29.020485       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:43:29.020606       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:43:29.020609       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:43:29.026536       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:43:29.026834       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:43:29.039184       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:29.302783       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:43:29.331634       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:43:29.351776       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:29.363371       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:29.371058       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:43:29.408289       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.59.131"}
	I1101 09:43:29.420176       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.81.201"}
	I1101 09:43:29.920022       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:32.386735       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:43:32.736860       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:32.788669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [9785973a66a04a46a343a8714f4f3ed4c9a422a4aa4c161fb46c9bc2f6bb8b09] <==
	I1101 09:43:32.384327       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:43:32.384347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:43:32.384363       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:43:32.384388       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:43:32.384418       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:43:32.384501       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:32.384564       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:43:32.384587       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:43:32.384663       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:43:32.385630       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:43:32.385655       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:43:32.385692       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:43:32.388942       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:43:32.389029       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:32.389067       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1101 09:43:32.389117       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1101 09:43:32.389156       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1101 09:43:32.389165       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1101 09:43:32.389176       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1101 09:43:32.390369       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:43:32.391076       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:32.393237       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:43:32.395535       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:43:32.410007       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:42.336110       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ab26c6524a0b1a9d55101400960c2a25417db0dca853ddac83d92c109aa50030] <==
	I1101 09:43:29.866394       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:43:29.952419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:43:30.053111       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:43:30.053164       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1101 09:43:30.053268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:43:30.074857       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:30.074943       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:43:30.082024       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:43:30.082546       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:43:30.082580       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:30.085903       1 config.go:200] "Starting service config controller"
	I1101 09:43:30.085943       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:43:30.086152       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:43:30.086176       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:43:30.086171       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:43:30.086196       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:43:30.086767       1 config.go:309] "Starting node config controller"
	I1101 09:43:30.086849       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:43:30.186127       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:43:30.186762       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:43:30.187974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:43:30.187992       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [4c03b3a94925c1523c1521759ebdf9c97a75c228e24c8af2033855424a4b0819] <==
	I1101 09:43:27.315799       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:43:28.993304       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:43:28.993418       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:29.001863       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:43:29.001926       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:43:29.002002       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:29.002027       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:29.002085       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:43:29.002136       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:43:29.002321       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:43:29.002581       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:43:29.102338       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:29.102362       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:43:29.102362       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:43:30 no-preload-224845 kubelet[708]: E1101 09:43:30.251365     708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq podName:9e4e4413-d3b7-4a5f-b088-241e94f310a4 nodeName:}" failed. No retries permitted until 2025-11-01 09:43:31.251348572 +0000 UTC m=+4.844972456 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-knnxq" (UniqueName: "kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq") pod "busybox" (UID: "9e4e4413-d3b7-4a5f-b088-241e94f310a4") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:30 no-preload-224845 kubelet[708]: I1101 09:43:30.521455     708 scope.go:117] "RemoveContainer" containerID="12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2"
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.056160     708 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.056251     708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a237a12-aaf7-47b2-abd8-2af3fc8486e3-config-volume podName:6a237a12-aaf7-47b2-abd8-2af3fc8486e3 nodeName:}" failed. No retries permitted until 2025-11-01 09:43:33.056232089 +0000 UTC m=+6.649855973 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6a237a12-aaf7-47b2-abd8-2af3fc8486e3-config-volume") pod "coredns-66bc5c9577-8qn69" (UID: "6a237a12-aaf7-47b2-abd8-2af3fc8486e3") : object "kube-system"/"coredns" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.257451     708 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.257499     708 projected.go:196] Error preparing data for projected volume kube-api-access-knnxq for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:31 no-preload-224845 kubelet[708]: E1101 09:43:31.257578     708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq podName:9e4e4413-d3b7-4a5f-b088-241e94f310a4 nodeName:}" failed. No retries permitted until 2025-11-01 09:43:33.257557058 +0000 UTC m=+6.851180965 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-knnxq" (UniqueName: "kubernetes.io/projected/9e4e4413-d3b7-4a5f-b088-241e94f310a4-kube-api-access-knnxq") pod "busybox" (UID: "9e4e4413-d3b7-4a5f-b088-241e94f310a4") : object "default"/"kube-root-ca.crt" not registered
	Nov 01 09:43:37 no-preload-224845 kubelet[708]: I1101 09:43:37.509693     708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.516115     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m99v5\" (UniqueName: \"kubernetes.io/projected/6cd9340e-f127-41fe-b84a-bf26b5741cf5-kube-api-access-m99v5\") pod \"dashboard-metrics-scraper-6ffb444bf9-xhszq\" (UID: \"6cd9340e-f127-41fe-b84a-bf26b5741cf5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.516173     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6cd9340e-f127-41fe-b84a-bf26b5741cf5-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-xhszq\" (UID: \"6cd9340e-f127-41fe-b84a-bf26b5741cf5\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.616523     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5cc5ae62-ff49-4cb6-8b46-6c99687d75e6-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-fbzt6\" (UID: \"5cc5ae62-ff49-4cb6-8b46-6c99687d75e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6"
	Nov 01 09:43:39 no-preload-224845 kubelet[708]: I1101 09:43:39.616574     708 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qpj\" (UniqueName: \"kubernetes.io/projected/5cc5ae62-ff49-4cb6-8b46-6c99687d75e6-kube-api-access-44qpj\") pod \"kubernetes-dashboard-855c9754f9-fbzt6\" (UID: \"5cc5ae62-ff49-4cb6-8b46-6c99687d75e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6"
	Nov 01 09:43:43 no-preload-224845 kubelet[708]: I1101 09:43:43.570297     708 scope.go:117] "RemoveContainer" containerID="c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f"
	Nov 01 09:43:44 no-preload-224845 kubelet[708]: I1101 09:43:44.581599     708 scope.go:117] "RemoveContainer" containerID="2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	Nov 01 09:43:44 no-preload-224845 kubelet[708]: I1101 09:43:44.582578     708 scope.go:117] "RemoveContainer" containerID="c4c55dc058226a6ad34a91be3a31ffebdf1e1b108c4c699177e3e12a4e3ff24f"
	Nov 01 09:43:44 no-preload-224845 kubelet[708]: E1101 09:43:44.584950     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xhszq_kubernetes-dashboard(6cd9340e-f127-41fe-b84a-bf26b5741cf5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq" podUID="6cd9340e-f127-41fe-b84a-bf26b5741cf5"
	Nov 01 09:43:45 no-preload-224845 kubelet[708]: I1101 09:43:45.588529     708 scope.go:117] "RemoveContainer" containerID="2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	Nov 01 09:43:45 no-preload-224845 kubelet[708]: E1101 09:43:45.589435     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xhszq_kubernetes-dashboard(6cd9340e-f127-41fe-b84a-bf26b5741cf5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq" podUID="6cd9340e-f127-41fe-b84a-bf26b5741cf5"
	Nov 01 09:43:48 no-preload-224845 kubelet[708]: I1101 09:43:48.613464     708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbzt6" podStartSLOduration=8.351460723 podStartE2EDuration="16.61344143s" podCreationTimestamp="2025-11-01 09:43:32 +0000 UTC" firstStartedPulling="2025-11-01 09:43:39.764114269 +0000 UTC m=+13.357738159" lastFinishedPulling="2025-11-01 09:43:48.026094966 +0000 UTC m=+21.619718866" observedRunningTime="2025-11-01 09:43:48.613410185 +0000 UTC m=+22.207034095" watchObservedRunningTime="2025-11-01 09:43:48.61344143 +0000 UTC m=+22.207065339"
	Nov 01 09:43:49 no-preload-224845 kubelet[708]: I1101 09:43:49.734305     708 scope.go:117] "RemoveContainer" containerID="2598363a2aa16b0d356f58d531b41fa77de810ae552da0dd8ff6d3b7b3f95c06"
	Nov 01 09:43:49 no-preload-224845 kubelet[708]: E1101 09:43:49.734554     708 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xhszq_kubernetes-dashboard(6cd9340e-f127-41fe-b84a-bf26b5741cf5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xhszq" podUID="6cd9340e-f127-41fe-b84a-bf26b5741cf5"
	Nov 01 09:44:00 no-preload-224845 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:00 no-preload-224845 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:00 no-preload-224845 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:44:00 no-preload-224845 systemd[1]: kubelet.service: Consumed 1.449s CPU time.
	
	
	==> kubernetes-dashboard [a2c0080e3c6d3e880d945689bfcd19fcb4747c5123b285ad65c324e333afaf59] <==
	2025/11/01 09:43:48 Starting overwatch
	2025/11/01 09:43:48 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:48 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:48 Using secret token for csrf signing
	2025/11/01 09:43:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:43:48 Generating JWE encryption key
	2025/11/01 09:43:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:48 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:48 Creating in-cluster Sidecar client
	2025/11/01 09:43:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:48 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [12e0108d20913a3c9ee251ccc9bbbd1accfdf1c4dbf353477809b6ed74cfb5b2] <==
	I1101 09:43:29.836735       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:29.838408       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [b1d80070c7b62b4ce3881325918d5ec26625a7c59e81595495812d129d8df397] <==
	W1101 09:43:47.997510       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:47.997679       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1101 09:43:47.997764       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4838c3ac-e532-4877-9b16-b80d4afab202", APIVersion:"v1", ResourceVersion:"653", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-224845_4bc14e65-0a5d-44a8-a864-b525c920b0d3 became leader
	I1101 09:43:47.997806       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-224845_4bc14e65-0a5d-44a8-a864-b525c920b0d3!
	W1101 09:43:48.003363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:48.007432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1101 09:43:48.098422       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-224845_4bc14e65-0a5d-44a8-a864-b525c920b0d3!
	W1101 09:43:50.010211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:50.014706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:52.018298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:52.029058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:54.033089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:54.038659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:56.043193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:56.061126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:58.064518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:43:58.070786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:00.074783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:00.085059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:02.089185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:02.094111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:04.098076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:04.108075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:06.111541       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:06.222847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
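
Aside on the storage-provisioner output above: each leader-election poll still reads the v1 Endpoints object, which Kubernetes has deprecated in favor of discovery.k8s.io/v1 EndpointSlice, hence the warning on every two-second renewal. A hedged way to look at the lease object and the namespace's EndpointSlices directly (a sketch only; it assumes the no-preload-224845 context still exists in the kubeconfig, which the later delete steps remove):

	kubectl --context no-preload-224845 -n kube-system get endpoints k8s.io-minikube-hostpath
	kubectl --context no-preload-224845 -n kube-system get endpointslices
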
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-224845 -n no-preload-224845
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-224845 -n no-preload-224845: exit status 2 (359.924401ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-224845 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (8.29s)
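
For orientation: the post-mortem above shows systemd stopping kubelet at 09:44:00 while the status probe still reports the API server container as Running (exit status 2, "may be ok"). A minimal sketch for watching both signals on a live profile (hedged: it assumes the profile has not yet been deleted; the status fields are the same ones the test's --format templates use):

	# host/kubelet/apiserver states via minikube's status template
	out/minikube-linux-amd64 status -p no-preload-224845 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
	# cross-check the kubelet unit via systemd inside the node
	out/minikube-linux-amd64 -p no-preload-224845 ssh -- "systemctl is-active kubelet"
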

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (260.774319ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
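
The MK_ADDON_ENABLE_PAUSED exit above is minikube's paused check failing, not the addon itself: per the stderr, it shells out to `sudo runc list -f json` on the node, and runc cannot open its state directory /run/runc. A hedged repro of the same probe (assuming the newest-cni-722387 profile is still running; both commands are quoted from or derived from the error text):

	# does runc's default state directory exist on the node?
	out/minikube-linux-amd64 -p newest-cni-722387 ssh -- "ls -ld /run/runc"
	# the exact listing minikube's pause check runs, per the stderr above
	out/minikube-linux-amd64 -p newest-cni-722387 ssh -- "sudo runc list -f json"
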
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-722387
helpers_test.go:243: (dbg) docker inspect newest-cni-722387:

-- stdout --
	[
	    {
	        "Id": "5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6",
	        "Created": "2025-11-01T09:44:06.484487044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 424355,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:44:06.527051507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/hosts",
	        "LogPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6-json.log",
	        "Name": "/newest-cni-722387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-722387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-722387",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6",
	                "LowerDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-722387",
	                "Source": "/var/lib/docker/volumes/newest-cni-722387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-722387",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-722387",
	                "name.minikube.sigs.k8s.io": "newest-cni-722387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df87f05ef519b1bde41ecfbd0f030eb09f16421fae9342f7f72757ef2f9f1b92",
	            "SandboxKey": "/var/run/docker/netns/df87f05ef519",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-722387": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9e:4f:bf:9e:1f:4c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "097f5920ceba29035e623d66cf12db8333915593551ced6800060e5546bfb0e0",
	                    "EndpointID": "03285f2cd4eaee206f7670218719f81ae3e98e0bd382731b22a4a7157239a610",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-722387",
	                        "5cc4aeec7217"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
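
One reading note on the inspect output: HostConfig.PortBindings asks Docker for ephemeral host ports (empty "HostPort" strings), and the concrete allocations only materialize under NetworkSettings.Ports (33128-33132 here). A hedged one-liner to resolve the API server mapping without parsing the full JSON (assumes the container still exists):

	docker port newest-cni-722387 8443
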
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-722387 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-106430 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:42 UTC │
	│ start   │ -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:00.823522  422921 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:00.823684  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823696  422921 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:00.823702  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823906  422921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:00.824429  422921 out.go:368] Setting JSON to false
	I1101 09:44:00.825935  422921 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5179,"bootTime":1761985062,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:00.826062  422921 start.go:143] virtualization: kvm guest
	I1101 09:44:00.828080  422921 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:00.829516  422921 notify.go:221] Checking for updates...
	I1101 09:44:00.829545  422921 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:00.831103  422921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:00.832421  422921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:00.833671  422921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:00.835236  422921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:00.836312  422921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:00.838662  422921 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.838859  422921 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839032  422921 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839168  422921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:00.868651  422921 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:00.868776  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.932313  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.919582405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.932410  422921 docker.go:319] overlay module found
	I1101 09:44:00.934186  422921 out.go:179] * Using the docker driver based on user configuration
	I1101 09:44:00.935396  422921 start.go:309] selected driver: docker
	I1101 09:44:00.935426  422921 start.go:930] validating driver "docker" against <nil>
	I1101 09:44:00.935441  422921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:00.936076  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.998903  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.988574943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.999261  422921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:44:00.999309  422921 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:44:00.999988  422921 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:01.001892  422921 out.go:179] * Using Docker driver with root privileges
	I1101 09:44:01.003008  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:01.003093  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:01.003109  422921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:44:01.003194  422921 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:01.004455  422921 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:01.005836  422921 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:01.007040  422921 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:01.008185  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.008213  422921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:01.008239  422921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:01.008255  422921 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:01.008363  422921 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:01.008379  422921 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:01.008553  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:01.008588  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json: {Name:mk9b2e752fcdc3711c80d757637de7b71a85dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
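The two lines above show the profile being persisted: the cluster config is serialized to config.json under a per-file write lock ({Delay:500ms Timeout:1m0s}). As a rough, hypothetical sketch (not minikube's actual code), the save step amounts to marshaling the config struct and writing it out; ClusterConfig below is an illustrative subset of the fields visible in the log:

	package main

	import (
		"encoding/json"
		"os"
	)

	// Illustrative subset of the profile fields visible in the log above.
	type ClusterConfig struct {
		Name              string `json:"Name"`
		Driver            string `json:"Driver"`
		KubernetesVersion string `json:"KubernetesVersion"`
		Memory            int    `json:"Memory"`
		CPUs              int    `json:"CPUs"`
	}

	func main() {
		cfg := ClusterConfig{Name: "newest-cni-722387", Driver: "docker",
			KubernetesVersion: "v1.34.1", Memory: 3072, CPUs: 2}
		data, _ := json.MarshalIndent(cfg, "", "  ")
		_ = os.WriteFile("config.json", data, 0o644)
	}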
	I1101 09:44:01.031509  422921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:01.031532  422921 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:01.031549  422921 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:01.031586  422921 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:01.031708  422921 start.go:364] duration metric: took 99.393µs to acquireMachinesLock for "newest-cni-722387"
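acquireMachinesLock reports the same Delay/Timeout options ({Delay:500ms Timeout:10m0s}). A minimal sketch of that retry-until-deadline pattern, assuming a simple exclusive lock file rather than minikube's real lock implementation:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireLock is a hypothetical stand-in for the machines lock above:
	// retry an exclusive lock file every `delay` until `timeout` elapses.
	func acquireLock(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out waiting for " + path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquireLock("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("machines lock held")
	}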
	I1101 09:44:01.031740  422921 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:01.031822  422921 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:44:00.646338  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.647289  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.540153  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:04.540648  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:01.033898  422921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:44:01.034155  422921 start.go:159] libmachine.API.Create for "newest-cni-722387" (driver="docker")
	I1101 09:44:01.034187  422921 client.go:173] LocalClient.Create starting
	I1101 09:44:01.034307  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem
	I1101 09:44:01.034359  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034377  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034445  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem
	I1101 09:44:01.034476  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034491  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034944  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:44:01.054283  422921 cli_runner.go:211] docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:44:01.054353  422921 network_create.go:284] running [docker network inspect newest-cni-722387] to gather additional debugging logs...
	I1101 09:44:01.054368  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387
	W1101 09:44:01.073549  422921 cli_runner.go:211] docker network inspect newest-cni-722387 returned with exit code 1
	I1101 09:44:01.073579  422921 network_create.go:287] error running [docker network inspect newest-cni-722387]: docker network inspect newest-cni-722387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-722387 not found
	I1101 09:44:01.073594  422921 network_create.go:289] output of [docker network inspect newest-cni-722387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-722387 not found
	
	** /stderr **
	I1101 09:44:01.073692  422921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:01.093393  422921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d29bf8504a2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:cd:69:fb:c0:b7} reservation:<nil>}
	I1101 09:44:01.094218  422921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a4cb229b081d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:6d:0e:f5:7f:54} reservation:<nil>}
	I1101 09:44:01.095202  422921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-859d00dbc8b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:da:ec:9f:a9:b4} reservation:<nil>}
	I1101 09:44:01.095784  422921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5df57938ba0e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:1b:ab:95:75:01} reservation:<nil>}
	I1101 09:44:01.096312  422921 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-fd9ea47f5997 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:e6:71:2c:14:ef} reservation:<nil>}
	I1101 09:44:01.096837  422921 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ef396acdcfef IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:66:2e:03:68:3f:bb} reservation:<nil>}
	I1101 09:44:01.097629  422921 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1da70}
	I1101 09:44:01.097655  422921 network_create.go:124] attempt to create docker network newest-cni-722387 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1101 09:44:01.097704  422921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-722387 newest-cni-722387
	I1101 09:44:01.177766  422921 network_create.go:108] docker network newest-cni-722387 192.168.103.0/24 created
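The subnet probe above walks candidate private /24 networks, stepping the third octet by 9 (49, 58, 67, 76, 85, 94, ...) as the skipped subnets suggest, and takes the first one with no existing bridge interface. A toy reproduction of that selection, with a stand-in "taken" set instead of minikube's real interface inspection:

	package main

	import "fmt"

	// firstFreeSubnet mimics the probe logged above: step the third octet
	// and return the first candidate not already taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true, "192.168.94.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.103.0/24
	}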
	I1101 09:44:01.177827  422921 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-722387" container
	I1101 09:44:01.177901  422921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:44:01.199194  422921 cli_runner.go:164] Run: docker volume create newest-cni-722387 --label name.minikube.sigs.k8s.io=newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:44:01.221436  422921 oci.go:103] Successfully created a docker volume newest-cni-722387
	I1101 09:44:01.221600  422921 cli_runner.go:164] Run: docker run --rm --name newest-cni-722387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --entrypoint /usr/bin/test -v newest-cni-722387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:44:01.677464  422921 oci.go:107] Successfully prepared a docker volume newest-cni-722387
	I1101 09:44:01.677514  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.677544  422921 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:44:01.677623  422921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:44:05.146749  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:07.148682  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:09.647053  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:07.041096  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:09.539685  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:06.398607  422921 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.720922389s)
	I1101 09:44:06.398651  422921 kic.go:203] duration metric: took 4.721100224s to extract preloaded images to volume ...
	W1101 09:44:06.398758  422921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:44:06.398800  422921 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:44:06.398852  422921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:44:06.465541  422921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-722387 --name newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-722387 --network newest-cni-722387 --ip 192.168.103.2 --volume newest-cni-722387:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:44:06.805538  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Running}}
	I1101 09:44:06.828089  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:06.851749  422921 cli_runner.go:164] Run: docker exec newest-cni-722387 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:44:06.904120  422921 oci.go:144] the created container "newest-cni-722387" has a running status.
	I1101 09:44:06.904157  422921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa...
	I1101 09:44:07.001848  422921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:44:07.037979  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:07.065271  422921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:44:07.065301  422921 kic_runner.go:114] Args: [docker exec --privileged newest-cni-722387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:44:07.117749  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:07.141576  422921 machine.go:94] provisionDockerMachine start ...
	I1101 09:44:07.141692  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.170345  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.170754  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.171294  422921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:44:07.329839  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:07.329873  422921 ubuntu.go:182] provisioning hostname "newest-cni-722387"
	I1101 09:44:07.329971  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.351602  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.351850  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.351866  422921 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-722387 && echo "newest-cni-722387" | sudo tee /etc/hostname
	I1101 09:44:07.513163  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:07.513257  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.536121  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.536418  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.536455  422921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-722387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-722387/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-722387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:44:07.690179  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:44:07.690213  422921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:44:07.690235  422921 ubuntu.go:190] setting up certificates
	I1101 09:44:07.690247  422921 provision.go:84] configureAuth start
	I1101 09:44:07.690303  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:07.710388  422921 provision.go:143] copyHostCerts
	I1101 09:44:07.710461  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:44:07.710477  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:44:07.710559  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:44:07.710683  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:44:07.710693  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:44:07.710734  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:44:07.710817  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:44:07.710827  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:44:07.710863  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:44:07.710954  422921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.newest-cni-722387 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-722387]
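The server cert generated here carries the SANs listed in the log (127.0.0.1, 192.168.103.2, localhost, minikube, newest-cni-722387). A minimal standard-library sketch of issuing such a cert; it self-signs for brevity where the real flow signs with ca.pem/ca-key.pem, and the key size and serial handling are simplified assumptions:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-722387"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
			DNSNames:     []string{"localhost", "minikube", "newest-cni-722387"},
		}
		// Self-signed here for brevity; the real flow signs with the profile CA.
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}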
	I1101 09:44:08.842065  422921 provision.go:177] copyRemoteCerts
	I1101 09:44:08.842134  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:44:08.842180  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:08.862777  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:08.967012  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:44:08.987471  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:44:09.005392  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:44:09.024170  422921 provision.go:87] duration metric: took 1.333906879s to configureAuth
	I1101 09:44:09.024208  422921 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:44:09.024391  422921 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:09.024511  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.046693  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:09.046953  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:09.046976  422921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:44:09.318902  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:44:09.318969  422921 machine.go:97] duration metric: took 2.177370299s to provisionDockerMachine
	I1101 09:44:09.318981  422921 client.go:176] duration metric: took 8.284787176s to LocalClient.Create
	I1101 09:44:09.319007  422921 start.go:167] duration metric: took 8.284854636s to libmachine.API.Create "newest-cni-722387"
	I1101 09:44:09.319021  422921 start.go:293] postStartSetup for "newest-cni-722387" (driver="docker")
	I1101 09:44:09.319035  422921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:44:09.319106  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:44:09.319169  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.339325  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.443792  422921 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:44:09.447954  422921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:44:09.447981  422921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:44:09.448002  422921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:44:09.448066  422921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:44:09.448161  422921 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:44:09.448269  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:44:09.457217  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:09.478393  422921 start.go:296] duration metric: took 159.356449ms for postStartSetup
	I1101 09:44:09.478781  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:09.497615  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:09.497880  422921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:44:09.497971  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.516558  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.616534  422921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:44:09.621184  422921 start.go:128] duration metric: took 8.589348109s to createHost
	I1101 09:44:09.621206  422921 start.go:83] releasing machines lock for "newest-cni-722387", held for 8.589483705s
	I1101 09:44:09.621261  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:09.639137  422921 ssh_runner.go:195] Run: cat /version.json
	I1101 09:44:09.639152  422921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:44:09.639193  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.639227  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.659576  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.660064  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.813835  422921 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:09.820702  422921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:44:09.859165  422921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:44:09.863899  422921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:44:09.863990  422921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:44:09.891587  422921 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
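The find/mv step above disables any pre-existing bridge or podman CNI config by renaming it with a .mk_disabled suffix so the recommended kindnet config wins. A hedged Go equivalent of that rename pass (the directory and name patterns mirror the logged command; this is not minikube's actual implementation):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Match the find predicate: bridge or podman configs only.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				old := filepath.Join(dir, name)
				os.Rename(old, old+".mk_disabled")
			}
		}
	}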
	I1101 09:44:09.891611  422921 start.go:496] detecting cgroup driver to use...
	I1101 09:44:09.891642  422921 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:44:09.891685  422921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:44:09.908170  422921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:44:09.920701  422921 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:44:09.920762  422921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:44:09.939277  422921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:44:09.958203  422921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:44:10.041329  422921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:44:10.138596  422921 docker.go:234] disabling docker service ...
	I1101 09:44:10.138674  422921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:44:10.162388  422921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:44:10.183310  422921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:44:10.277717  422921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:44:10.364259  422921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:44:10.377455  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:44:10.392986  422921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:44:10.393061  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.404147  422921 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:44:10.404225  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.414290  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.424717  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.434248  422921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:44:10.444846  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.466459  422921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.491214  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.504176  422921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:44:10.514111  422921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:44:10.522737  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:10.603998  422921 ssh_runner.go:195] Run: sudo systemctl restart crio
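Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines before crio is restarted; the surrounding TOML sections are not shown in the log, so this fragment is indicative only:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]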
	I1101 09:44:10.745956  422921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:44:10.746039  422921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:44:10.750485  422921 start.go:564] Will wait 60s for crictl version
	I1101 09:44:10.750549  422921 ssh_runner.go:195] Run: which crictl
	I1101 09:44:10.754696  422921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:44:10.782770  422921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:44:10.782858  422921 ssh_runner.go:195] Run: crio --version
	I1101 09:44:10.814129  422921 ssh_runner.go:195] Run: crio --version
	I1101 09:44:10.842831  422921 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:44:10.845742  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:10.865977  422921 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:44:10.870737  422921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:10.886253  422921 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:44:10.888153  422921 kubeadm.go:884] updating cluster {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:44:10.888347  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:10.888429  422921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:10.923317  422921 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:10.923339  422921 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:44:10.923383  422921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:10.954698  422921 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:10.954725  422921 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:44:10.954734  422921 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 09:44:10.954838  422921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-722387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:44:10.954936  422921 ssh_runner.go:195] Run: crio config
	I1101 09:44:11.004449  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:11.004473  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:11.004494  422921 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:44:11.004527  422921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-722387 NodeName:newest-cni-722387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:44:11.004682  422921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-722387"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:44:11.004760  422921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:44:11.014530  422921 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:44:11.014605  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:44:11.024541  422921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:44:11.040294  422921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:44:11.062728  422921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 09:44:11.077519  422921 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:44:11.081688  422921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:11.094974  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:11.196048  422921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:11.218864  422921 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387 for IP: 192.168.103.2
	I1101 09:44:11.218886  422921 certs.go:195] generating shared ca certs ...
	I1101 09:44:11.218905  422921 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.219079  422921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:44:11.219129  422921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:44:11.219137  422921 certs.go:257] generating profile certs ...
	I1101 09:44:11.219206  422921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key
	I1101 09:44:11.219226  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt with IP's: []
	I1101 09:44:11.461428  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt ...
	I1101 09:44:11.461455  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt: {Name:mka26fe91724530410954f0cb0f760186d382fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.461645  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key ...
	I1101 09:44:11.461660  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key: {Name:mkfd8769aff14fe4cbc98be403d7018408109ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.462191  422921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae
	I1101 09:44:11.462211  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 09:44:11.868383  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae ...
	I1101 09:44:11.868418  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae: {Name:mk4f413fba17a26ebf9c87bc9593ce90dfb89ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.868634  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae ...
	I1101 09:44:11.868654  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae: {Name:mk80e9deceb79e9196c5e16230d90849359b0914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.868766  422921 certs.go:382] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt
	I1101 09:44:11.868896  422921 certs.go:386] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key
	I1101 09:44:11.869007  422921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key
	I1101 09:44:11.869030  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt with IP's: []
	I1101 09:44:12.000122  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt ...
	I1101 09:44:12.000158  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt: {Name:mk2ad63222a5177d2492cb7d1ba84a51f7e11b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:12.000356  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key ...
	I1101 09:44:12.000380  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key: {Name:mk162ecceb80bafb66ce5e25b61bc5c04bab15ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:12.000606  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:44:12.000657  422921 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:44:12.000669  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:44:12.000700  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:44:12.000738  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:44:12.000771  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:44:12.000830  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:12.001938  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:44:12.022650  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:44:12.042868  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:44:12.061963  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:44:12.080887  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:44:12.100615  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:44:12.120330  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:44:12.141267  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:44:12.163343  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:44:12.186827  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:44:12.208773  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:44:12.230567  422921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:44:12.246541  422921 ssh_runner.go:195] Run: openssl version
	I1101 09:44:12.253877  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:44:12.264673  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.270201  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.270261  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.306242  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:44:12.315679  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:44:12.325013  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.329129  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.329188  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.371362  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:44:12.380856  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:44:12.390756  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.394977  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.395045  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.432336  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
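
The openssl/ln exchanges above follow OpenSSL's CA lookup convention: "openssl x509 -hash -noout" prints the subject-name hash of a certificate, and the PEM is then symlinked into /etc/ssl/certs as <hash>.0 (e.g. b5213941.0) so the system trust store can resolve it. A minimal Go sketch of one hash-and-link round, assuming root privileges and an openssl binary on PATH; linkCA is a hypothetical helper, not minikube's actual code:

// Hedged sketch: reproduce the hash/symlink step from the log above.
// "openssl x509 -hash -noout" prints the subject-name hash OpenSSL uses
// to look up CA certs in /etc/ssl/certs as "<hash>.0".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func linkCA(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // mimic `ln -fs`: replace any stale link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
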
	I1101 09:44:12.442783  422921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:44:12.446834  422921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:44:12.446922  422921 kubeadm.go:401] StartCluster: {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:12.447025  422921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:44:12.447084  422921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:44:12.476859  422921 cri.go:89] found id: ""
	I1101 09:44:12.476960  422921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:44:12.485947  422921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:44:12.495118  422921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:44:12.495191  422921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:44:12.503837  422921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:44:12.503858  422921 kubeadm.go:158] found existing configuration files:
	
	I1101 09:44:12.503904  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:44:12.513028  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:44:12.513123  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:44:12.521312  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:44:12.529973  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:44:12.530061  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:44:12.539012  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:44:12.547839  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:44:12.547902  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:44:12.555437  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:44:12.563284  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:44:12.563337  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
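
Each grep/rm pair above is minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so that the following kubeadm init regenerates it. A hedged Go sketch of the same sweep, illustrative only and not the actual kubeadm.go code:

// Hedged sketch: keep /etc/kubernetes/*.conf only if it already points at
// the expected control-plane endpoint, as the grep/rm pairs above do.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := "/etc/kubernetes/" + f
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// missing or pointing elsewhere: remove so kubeadm regenerates it
			_ = os.Remove(path)
			fmt.Println("removed stale", path)
		}
	}
}
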
	I1101 09:44:12.571308  422921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:44:12.610706  422921 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:44:12.610775  422921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:44:12.633149  422921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:44:12.633212  422921 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:44:12.633239  422921 kubeadm.go:319] OS: Linux
	I1101 09:44:12.633299  422921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:44:12.633366  422921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:44:12.633462  422921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:44:12.633544  422921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:44:12.633643  422921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:44:12.633730  422921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:44:12.633795  422921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:44:12.633869  422921 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:44:12.697497  422921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:44:12.697699  422921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:44:12.697851  422921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:44:12.706690  422921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 09:44:11.647095  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:14.146846  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:12.040114  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:14.539447  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:12.709509  422921 out.go:252]   - Generating certificates and keys ...
	I1101 09:44:12.709606  422921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:44:12.709733  422921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:44:12.854194  422921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:44:13.380816  422921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:44:13.429834  422921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:44:13.579950  422921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:44:13.897795  422921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:44:13.898003  422921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-722387] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:44:14.063204  422921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:44:14.063358  422921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-722387] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:44:14.595857  422921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:44:14.864904  422921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:44:14.974817  422921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:44:14.974965  422921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:44:15.411035  422921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:44:15.738220  422921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:44:16.163033  422921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:44:16.379713  422921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:44:16.656283  422921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:44:16.656703  422921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:44:16.660761  422921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 09:44:16.646757  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	I1101 09:44:18.647241  415212 pod_ready.go:94] pod "coredns-66bc5c9577-cmnj8" is "Ready"
	I1101 09:44:18.647288  415212 pod_ready.go:86] duration metric: took 32.006837487s for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.650658  415212 pod_ready.go:83] waiting for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.655619  415212 pod_ready.go:94] pod "etcd-embed-certs-214580" is "Ready"
	I1101 09:44:18.655650  415212 pod_ready.go:86] duration metric: took 4.963735ms for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.658523  415212 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.664301  415212 pod_ready.go:94] pod "kube-apiserver-embed-certs-214580" is "Ready"
	I1101 09:44:18.664329  415212 pod_ready.go:86] duration metric: took 5.774053ms for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.666532  415212 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.845825  415212 pod_ready.go:94] pod "kube-controller-manager-embed-certs-214580" is "Ready"
	I1101 09:44:18.845858  415212 pod_ready.go:86] duration metric: took 179.302458ms for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.045094  415212 pod_ready.go:83] waiting for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.445427  415212 pod_ready.go:94] pod "kube-proxy-49j45" is "Ready"
	I1101 09:44:19.445528  415212 pod_ready.go:86] duration metric: took 400.403346ms for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.645441  415212 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:20.044873  415212 pod_ready.go:94] pod "kube-scheduler-embed-certs-214580" is "Ready"
	I1101 09:44:20.044906  415212 pod_ready.go:86] duration metric: took 399.441ms for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:20.044958  415212 pod_ready.go:40] duration metric: took 33.416946486s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:44:20.092487  415212 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:20.094263  415212 out.go:179] * Done! kubectl is now configured to use "embed-certs-214580" cluster and "default" namespace by default
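
The pod_ready waits interleaved above (process 415212) poll each kube-system control-plane pod until its Ready condition reports True, or until the pod is gone. A sketch of one such wait using client-go, assuming a kubeconfig at the default location; the pod name is taken from the log, and the 500ms poll interval is an assumption, not minikube's actual backoff:

// Hedged sketch: poll a pod until its Ready condition is True, roughly
// what the pod_ready waits above report.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-214580", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
}
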
	W1101 09:44:17.039371  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:19.039784  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:16.662416  422921 out.go:252]   - Booting up control plane ...
	I1101 09:44:16.662552  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:44:16.662673  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:44:16.663362  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:44:16.678425  422921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:44:16.678561  422921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:44:16.687747  422921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:44:16.688059  422921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:44:16.688132  422921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:44:16.797757  422921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:44:16.797944  422921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:44:17.299548  422921 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.906008ms
	I1101 09:44:17.303198  422921 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:44:17.303364  422921 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1101 09:44:17.303521  422921 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:44:17.303650  422921 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:44:18.761873  422921 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.458638771s
	I1101 09:44:19.840176  422921 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.53698671s
	I1101 09:44:21.304575  422921 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001294509s
	I1101 09:44:21.315500  422921 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:44:21.326477  422921 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:44:21.336740  422921 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:44:21.337047  422921 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-722387 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:44:21.345651  422921 kubeadm.go:319] [bootstrap-token] Using token: hcqanb.hb6jvis691nmk76a
	I1101 09:44:21.347127  422921 out.go:252]   - Configuring RBAC rules ...
	I1101 09:44:21.347291  422921 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:44:21.350981  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:44:21.357244  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:44:21.360342  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:44:21.364309  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:44:21.367361  422921 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:44:21.712031  422921 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:44:22.130627  422921 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:44:22.711154  422921 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:44:22.712086  422921 kubeadm.go:319] 
	I1101 09:44:22.712150  422921 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:44:22.712177  422921 kubeadm.go:319] 
	I1101 09:44:22.712290  422921 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:44:22.712300  422921 kubeadm.go:319] 
	I1101 09:44:22.712336  422921 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:44:22.712412  422921 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:44:22.712476  422921 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:44:22.712501  422921 kubeadm.go:319] 
	I1101 09:44:22.712589  422921 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:44:22.712599  422921 kubeadm.go:319] 
	I1101 09:44:22.712661  422921 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:44:22.712670  422921 kubeadm.go:319] 
	I1101 09:44:22.712714  422921 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:44:22.712814  422921 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:44:22.712954  422921 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:44:22.712966  422921 kubeadm.go:319] 
	I1101 09:44:22.713076  422921 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:44:22.713147  422921 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:44:22.713153  422921 kubeadm.go:319] 
	I1101 09:44:22.713233  422921 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hcqanb.hb6jvis691nmk76a \
	I1101 09:44:22.713362  422921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 \
	I1101 09:44:22.713400  422921 kubeadm.go:319] 	--control-plane 
	I1101 09:44:22.713409  422921 kubeadm.go:319] 
	I1101 09:44:22.713509  422921 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:44:22.713529  422921 kubeadm.go:319] 
	I1101 09:44:22.713633  422921 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hcqanb.hb6jvis691nmk76a \
	I1101 09:44:22.713766  422921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 
	I1101 09:44:22.716577  422921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:44:22.716763  422921 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:44:22.716802  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:22.716816  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:22.719006  422921 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 09:44:21.539320  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:23.539726  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:24.539003  415823 pod_ready.go:94] pod "coredns-66bc5c9577-mlk9t" is "Ready"
	I1101 09:44:24.539036  415823 pod_ready.go:86] duration metric: took 37.005664079s for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.542034  415823 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.546171  415823 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.546202  415823 pod_ready.go:86] duration metric: took 4.14183ms for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.548297  415823 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.552057  415823 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.552085  415823 pod_ready.go:86] duration metric: took 3.765443ms for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.553877  415823 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.737469  415823 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.737500  415823 pod_ready.go:86] duration metric: took 183.602214ms for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.936875  415823 pod_ready.go:83] waiting for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.337704  415823 pod_ready.go:94] pod "kube-proxy-dszvg" is "Ready"
	I1101 09:44:25.337740  415823 pod_ready.go:86] duration metric: took 400.799752ms for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.537478  415823 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:22.720310  422921 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:44:22.724894  422921 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:44:22.724941  422921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:44:22.738693  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:44:22.952858  422921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:44:22.952950  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:22.952991  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-722387 minikube.k8s.io/updated_at=2025_11_01T09_44_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=newest-cni-722387 minikube.k8s.io/primary=true
	I1101 09:44:23.035461  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:23.035594  422921 ops.go:34] apiserver oom_adj: -16
	I1101 09:44:23.536240  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:24.035722  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:24.536107  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.035835  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.535832  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.937251  415823 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:25.937281  415823 pod_ready.go:86] duration metric: took 399.779095ms for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.937293  415823 pod_ready.go:40] duration metric: took 38.410135058s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:44:25.982726  415823 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:25.985490  415823 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927869" cluster and "default" namespace by default
	I1101 09:44:26.036048  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:26.536576  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:27.035529  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:27.106742  422921 kubeadm.go:1114] duration metric: took 4.153880162s to wait for elevateKubeSystemPrivileges
	I1101 09:44:27.106782  422921 kubeadm.go:403] duration metric: took 14.659875744s to StartCluster
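
The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube retries until the default service account exists, and along the way records the apiserver's OOM score ("apiserver oom_adj: -16") from /proc/<pid>/oom_adj. A hedged Go sketch of that oom_adj read; the pgrep flags are simplified from the log's pgrep -xnf:

// Hedged sketch: read the kube-apiserver oom_adj the way the log's
// `cat /proc/$(pgrep kube-apiserver)/oom_adj` does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pid, err := exec.Command("pgrep", "-xn", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "no kube-apiserver process:", err)
		return
	}
	raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(raw))) // e.g. -16
}
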
	I1101 09:44:27.106806  422921 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:27.106895  422921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:27.108666  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:27.108895  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:44:27.108939  422921 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:44:27.109008  422921 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-722387"
	I1101 09:44:27.109025  422921 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-722387"
	I1101 09:44:27.108892  422921 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:27.109049  422921 addons.go:70] Setting default-storageclass=true in profile "newest-cni-722387"
	I1101 09:44:27.109061  422921 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:27.109082  422921 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-722387"
	I1101 09:44:27.109130  422921 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:27.109483  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.109621  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.111555  422921 out.go:179] * Verifying Kubernetes components...
	I1101 09:44:27.113019  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:27.134154  422921 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:44:27.135067  422921 addons.go:239] Setting addon default-storageclass=true in "newest-cni-722387"
	I1101 09:44:27.135105  422921 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:27.135433  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.135476  422921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:27.135497  422921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:44:27.135550  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:27.165881  422921 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:27.165927  422921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:44:27.165992  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:27.167165  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:27.193516  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:27.208174  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:44:27.256046  422921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:27.286530  422921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:27.309898  422921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:27.399415  422921 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
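
The sed pipeline above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the gateway IP (192.168.103.1) before the forward plugin takes over. A Go sketch of the same splice on an in-memory Corefile; injectHostRecord is hypothetical and the sample Corefile is abbreviated:

// Hedged sketch of the Corefile edit above: splice a `hosts` block (mapping
// host.minikube.internal to the gateway IP) in front of the forward plugin.
package main

import (
	"fmt"
	"strings"
)

func injectHostRecord(corefile, ip string) string {
	hosts := "        hosts {\n" +
		"           " + ip + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hosts) // insert the hosts block just above forward
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.103.1"))
}
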
	I1101 09:44:27.401255  422921 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:44:27.401316  422921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:44:27.609302  422921 api_server.go:72] duration metric: took 500.236103ms to wait for apiserver process to appear ...
	I1101 09:44:27.609331  422921 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:44:27.609359  422921 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:27.614891  422921 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:44:27.615772  422921 api_server.go:141] control plane version: v1.34.1
	I1101 09:44:27.615796  422921 api_server.go:131] duration metric: took 6.458373ms to wait for apiserver health ...
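
The healthz probe above is a plain HTTPS GET against https://192.168.103.2:8443/healthz that succeeds once it returns 200 with body "ok". A hedged Go sketch; a real client would trust the cluster CA rather than skipping verification, which is done here only to keep the example self-contained:

// Hedged sketch: the apiserver healthz check as a plain HTTPS GET.
// InsecureSkipVerify is for illustration only; load the cluster CA instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not up yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}
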
	I1101 09:44:27.615804  422921 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:44:27.616498  422921 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:44:27.618332  422921 system_pods.go:59] 7 kube-system pods found
	I1101 09:44:27.618371  422921 system_pods.go:61] "etcd-newest-cni-722387" [db6d9615-3fd5-4642-abb7-9c060c90d98e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:44:27.618366  422921 addons.go:515] duration metric: took 509.425146ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:44:27.618380  422921 system_pods.go:61] "kindnet-vq8r5" [0e3ba1a9-d43e-4944-bd85-a7858465eeb5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:44:27.618390  422921 system_pods.go:61] "kube-apiserver-newest-cni-722387" [8e6d728a-c7de-4b60-8627-f4e2729f14b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:44:27.618398  422921 system_pods.go:61] "kube-controller-manager-newest-cni-722387" [a0094ce2-c3fe-4f6f-9f2b-7d9871577296] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:44:27.618404  422921 system_pods.go:61] "kube-proxy-rxnwv" [b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:44:27.618412  422921 system_pods.go:61] "kube-scheduler-newest-cni-722387" [8c1c8755-a1ca-4aa2-894c-b7ae1e5f1ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:44:27.618418  422921 system_pods.go:61] "storage-provisioner" [cca90c7a-0f05-4855-ba4d-530a67715840] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:27.618426  422921 system_pods.go:74] duration metric: took 2.615581ms to wait for pod list to return data ...
	I1101 09:44:27.618435  422921 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:44:27.620461  422921 default_sa.go:45] found service account: "default"
	I1101 09:44:27.620483  422921 default_sa.go:55] duration metric: took 2.03963ms for default service account to be created ...
	I1101 09:44:27.620500  422921 kubeadm.go:587] duration metric: took 511.436014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:27.620522  422921 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:44:27.624060  422921 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:44:27.624099  422921 node_conditions.go:123] node cpu capacity is 8
	I1101 09:44:27.624117  422921 node_conditions.go:105] duration metric: took 3.590038ms to run NodePressure ...
	I1101 09:44:27.624134  422921 start.go:242] waiting for startup goroutines ...
	I1101 09:44:27.905064  422921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-722387" context rescaled to 1 replicas
	I1101 09:44:27.905103  422921 start.go:247] waiting for cluster config update ...
	I1101 09:44:27.905115  422921 start.go:256] writing updated cluster config ...
	I1101 09:44:27.905522  422921 ssh_runner.go:195] Run: rm -f paused
	I1101 09:44:27.956603  422921 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:27.958676  422921 out.go:179] * Done! kubectl is now configured to use "newest-cni-722387" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.312170108Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.313144554Z" level=info msg="Ran pod sandbox dd0f5f650b134580df2633e1734bf0ff81130157208af5f78d26a671908c2d08 with infra container: kube-system/kindnet-vq8r5/POD" id=034f80ce-d9fd-446c-87b2-c6a35a6d4517 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.313750964Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-rxnwv/POD" id=1ae8afdb-d621-401a-9f7a-ef9d42cda022 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.313813868Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.314768809Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=bc941bb2-471b-474f-ac8f-5e1b6ed5e659 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.317186134Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=feff1f42-587f-4935-a405-a3e61ef2cb8d name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.317188644Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ae8afdb-d621-401a-9f7a-ef9d42cda022 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.320389852Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.321416892Z" level=info msg="Ran pod sandbox 132e417aa15e93afb1469aa96a555933ba99cacd6b2ac17dc8b4031431bd97d9 with infra container: kube-system/kube-proxy-rxnwv/POD" id=1ae8afdb-d621-401a-9f7a-ef9d42cda022 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.321778393Z" level=info msg="Creating container: kube-system/kindnet-vq8r5/kindnet-cni" id=3bfbe10e-2977-4c2f-b5e0-62f117aa320a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.321887655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.32244162Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=43a88b2f-1d47-46fc-88c2-6b2508bf6db8 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.323316855Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=08cd1930-7237-495b-8098-e599e1231410 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.327202239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.327844075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.328756647Z" level=info msg="Creating container: kube-system/kube-proxy-rxnwv/kube-proxy" id=e4e6d617-86bb-4523-afc9-17c293213c25 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.328904699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.334183519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.334751049Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.355276136Z" level=info msg="Created container d61d3867c2c337bc64a8f1746cbe6105ab4eef13c88fb965d614dbcae93c39cf: kube-system/kindnet-vq8r5/kindnet-cni" id=3bfbe10e-2977-4c2f-b5e0-62f117aa320a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.356220009Z" level=info msg="Starting container: d61d3867c2c337bc64a8f1746cbe6105ab4eef13c88fb965d614dbcae93c39cf" id=2d18d635-c7e5-4f3a-af62-a6aa7df700d6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.359204521Z" level=info msg="Started container" PID=1648 containerID=d61d3867c2c337bc64a8f1746cbe6105ab4eef13c88fb965d614dbcae93c39cf description=kube-system/kindnet-vq8r5/kindnet-cni id=2d18d635-c7e5-4f3a-af62-a6aa7df700d6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd0f5f650b134580df2633e1734bf0ff81130157208af5f78d26a671908c2d08
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.359893489Z" level=info msg="Created container 5882733b9732fcf1dc7e194def90003331976b6c9a089915c43d46249c7af2ab: kube-system/kube-proxy-rxnwv/kube-proxy" id=e4e6d617-86bb-4523-afc9-17c293213c25 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.361257595Z" level=info msg="Starting container: 5882733b9732fcf1dc7e194def90003331976b6c9a089915c43d46249c7af2ab" id=a51ce71f-6e6f-4b93-9e12-bbe6e7fbd2b9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:28 newest-cni-722387 crio[783]: time="2025-11-01T09:44:28.365104694Z" level=info msg="Started container" PID=1649 containerID=5882733b9732fcf1dc7e194def90003331976b6c9a089915c43d46249c7af2ab description=kube-system/kube-proxy-rxnwv/kube-proxy id=a51ce71f-6e6f-4b93-9e12-bbe6e7fbd2b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=132e417aa15e93afb1469aa96a555933ba99cacd6b2ac17dc8b4031431bd97d9
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5882733b9732f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   Less than a second ago   Running             kube-proxy                0                   132e417aa15e9       kube-proxy-rxnwv                            kube-system
	d61d3867c2c33       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   Less than a second ago   Running             kindnet-cni               0                   dd0f5f650b134       kindnet-vq8r5                               kube-system
	58a7c978ce63a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago           Running             kube-scheduler            0                   fa3789dcc7bfd       kube-scheduler-newest-cni-722387            kube-system
	b3dbb0ff38405       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago           Running             etcd                      0                   998bbd73a1b80       etcd-newest-cni-722387                      kube-system
	8698afb372dad       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago           Running             kube-apiserver            0                   410507f48d546       kube-apiserver-newest-cni-722387            kube-system
	ecd48a9ed0132       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago           Running             kube-controller-manager   0                   3bb19e88d7723       kube-controller-manager-newest-cni-722387   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-722387
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-722387
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=newest-cni-722387
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_44_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:44:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-722387
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:22 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:22 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:22 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:44:22 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-722387
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                ae9053f9-594c-4df9-adeb-a6fd802f163d
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-722387                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-vq8r5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-722387             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-722387    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-rxnwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-722387             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 0s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-722387 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-722387 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-722387 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-722387 event: Registered Node newest-cni-722387 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [b3dbb0ff38405e8fad1e2c9984d21fa45270fa057d0219d632b3bf7e011dbe3c] <==
	{"level":"warn","ts":"2025-11-01T09:44:18.872446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.878815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.887152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.894474Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.901114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.907958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.914652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.927195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.933633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.941027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.951064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.958108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.970891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.978092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.984479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.991377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:18.998421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.004675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.011351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.018590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.024976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.047840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.054353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.061649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:19.115239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58986","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:44:29 up  1:26,  0 user,  load average: 9.47, 6.06, 3.54
	Linux newest-cni-722387 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d61d3867c2c337bc64a8f1746cbe6105ab4eef13c88fb965d614dbcae93c39cf] <==
	I1101 09:44:28.653514       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:44:28.653782       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:44:28.653951       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:44:28.653972       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:44:28.653996       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:44:28Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:44:28.855836       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:44:28.855947       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:44:28.855960       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:44:28.856124       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:44:29.199084       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:44:29.199145       1 metrics.go:72] Registering metrics
	I1101 09:44:29.200805       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8698afb372dadd76b424d49c74ac4af7646b0c2782a0f5d1e164c06dc430a5a2] <==
	E1101 09:44:19.679839       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1101 09:44:19.713723       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1101 09:44:19.727450       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:44:19.731052       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:44:19.731688       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1101 09:44:19.740038       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:44:19.740550       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:44:19.917372       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:44:20.530511       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1101 09:44:20.534769       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1101 09:44:20.534793       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:44:21.045170       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:44:21.082781       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:44:21.133997       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1101 09:44:21.140675       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1101 09:44:21.141851       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:44:21.146606       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:44:21.570039       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:44:22.119522       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:44:22.129625       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1101 09:44:22.137595       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1101 09:44:27.226836       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:44:27.233665       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:44:27.373118       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1101 09:44:27.426277       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [ecd48a9ed0132f1471da76dcc8ada8719405ad38d8152a7a0e1c8ee44ce97800] <==
	I1101 09:44:26.543088       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:44:26.568348       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:44:26.568472       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:44:26.569642       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:44:26.569694       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:44:26.569708       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:44:26.569772       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:44:26.569777       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:44:26.569821       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:44:26.569833       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:44:26.569843       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:44:26.569854       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:44:26.569832       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:44:26.569844       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:44:26.569881       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:44:26.569977       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:44:26.570817       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:44:26.570931       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:44:26.570939       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:44:26.572280       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:44:26.573420       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:44:26.573459       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:44:26.575767       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:44:26.581023       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:44:26.589763       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5882733b9732fcf1dc7e194def90003331976b6c9a089915c43d46249c7af2ab] <==
	I1101 09:44:28.407028       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:44:28.469901       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:44:28.570848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:44:28.571362       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 09:44:28.571538       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:44:28.595183       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:44:28.595256       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:44:28.601005       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:44:28.601552       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:44:28.601575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:44:28.603235       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:44:28.603259       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:44:28.603284       1 config.go:200] "Starting service config controller"
	I1101 09:44:28.603290       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:44:28.603305       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:44:28.603311       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:44:28.603324       1 config.go:309] "Starting node config controller"
	I1101 09:44:28.603339       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:44:28.603351       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:44:28.704142       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:44:28.704183       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:44:28.704151       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [58a7c978ce63a85d6eaafa2bc2d5346e741f20308b55a35d6edc79f56ffb488a] <==
	I1101 09:44:19.835145       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:44:19.835421       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:44:19.835513       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1101 09:44:19.836839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1101 09:44:19.837809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1101 09:44:19.838071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1101 09:44:19.837837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1101 09:44:19.837942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1101 09:44:19.837973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1101 09:44:19.838145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1101 09:44:19.837843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1101 09:44:19.838251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1101 09:44:19.838335       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1101 09:44:19.838394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1101 09:44:19.838634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:44:19.838778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1101 09:44:19.838806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1101 09:44:19.838819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1101 09:44:19.838994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1101 09:44:19.839155       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1101 09:44:19.839332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1101 09:44:19.839381       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1101 09:44:20.753693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1101 09:44:20.787940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1101 09:44:21.235455       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:44:22 newest-cni-722387 kubelet[1332]: E1101 09:44:22.972137    1332 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-722387\" already exists" pod="kube-system/kube-controller-manager-newest-cni-722387"
	Nov 01 09:44:22 newest-cni-722387 kubelet[1332]: E1101 09:44:22.972150    1332 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-722387\" already exists" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:22 newest-cni-722387 kubelet[1332]: E1101 09:44:22.972150    1332 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-722387\" already exists" pod="kube-system/kube-scheduler-newest-cni-722387"
	Nov 01 09:44:22 newest-cni-722387 kubelet[1332]: I1101 09:44:22.990606    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-722387" podStartSLOduration=0.990571486 podStartE2EDuration="990.571486ms" podCreationTimestamp="2025-11-01 09:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:44:22.990556338 +0000 UTC m=+1.129951155" watchObservedRunningTime="2025-11-01 09:44:22.990571486 +0000 UTC m=+1.129966303"
	Nov 01 09:44:23 newest-cni-722387 kubelet[1332]: I1101 09:44:23.012201    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-722387" podStartSLOduration=1.012179305 podStartE2EDuration="1.012179305s" podCreationTimestamp="2025-11-01 09:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:44:23.000935973 +0000 UTC m=+1.140330782" watchObservedRunningTime="2025-11-01 09:44:23.012179305 +0000 UTC m=+1.151574122"
	Nov 01 09:44:23 newest-cni-722387 kubelet[1332]: I1101 09:44:23.012340    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-722387" podStartSLOduration=1.012332525 podStartE2EDuration="1.012332525s" podCreationTimestamp="2025-11-01 09:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:44:23.011709276 +0000 UTC m=+1.151104092" watchObservedRunningTime="2025-11-01 09:44:23.012332525 +0000 UTC m=+1.151727346"
	Nov 01 09:44:23 newest-cni-722387 kubelet[1332]: I1101 09:44:23.022698    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-722387" podStartSLOduration=1.022675884 podStartE2EDuration="1.022675884s" podCreationTimestamp="2025-11-01 09:44:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:44:23.022621741 +0000 UTC m=+1.162016558" watchObservedRunningTime="2025-11-01 09:44:23.022675884 +0000 UTC m=+1.162070683"
	Nov 01 09:44:26 newest-cni-722387 kubelet[1332]: I1101 09:44:26.578765    1332 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:44:26 newest-cni-722387 kubelet[1332]: I1101 09:44:26.579594    1332 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467578    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-xtables-lock\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467670    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk45\" (UniqueName: \"kubernetes.io/projected/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-kube-api-access-llk45\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467704    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-lib-modules\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467731    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-cni-cfg\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467749    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-xtables-lock\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467769    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-lib-modules\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467792    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-kube-proxy\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: I1101 09:44:27.467875    1332 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd6dw\" (UniqueName: \"kubernetes.io/projected/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-kube-api-access-kd6dw\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: E1101 09:44:27.576024    1332 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: E1101 09:44:27.576065    1332 projected.go:196] Error preparing data for projected volume kube-api-access-kd6dw for pod kube-system/kube-proxy-rxnwv: configmap "kube-root-ca.crt" not found
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: E1101 09:44:27.576156    1332 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-kube-api-access-kd6dw podName:b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0 nodeName:}" failed. No retries permitted until 2025-11-01 09:44:28.076117172 +0000 UTC m=+6.215511983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-kd6dw" (UniqueName: "kubernetes.io/projected/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-kube-api-access-kd6dw") pod "kube-proxy-rxnwv" (UID: "b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0") : configmap "kube-root-ca.crt" not found
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: E1101 09:44:27.576450    1332 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: E1101 09:44:27.576476    1332 projected.go:196] Error preparing data for projected volume kube-api-access-llk45 for pod kube-system/kindnet-vq8r5: configmap "kube-root-ca.crt" not found
	Nov 01 09:44:27 newest-cni-722387 kubelet[1332]: E1101 09:44:27.576593    1332 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-kube-api-access-llk45 podName:0e3ba1a9-d43e-4944-bd85-a7858465eeb5 nodeName:}" failed. No retries permitted until 2025-11-01 09:44:28.076566283 +0000 UTC m=+6.215961095 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-llk45" (UniqueName: "kubernetes.io/projected/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-kube-api-access-llk45") pod "kindnet-vq8r5" (UID: "0e3ba1a9-d43e-4944-bd85-a7858465eeb5") : configmap "kube-root-ca.crt" not found
	Nov 01 09:44:29 newest-cni-722387 kubelet[1332]: I1101 09:44:29.003055    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vq8r5" podStartSLOduration=2.003035379 podStartE2EDuration="2.003035379s" podCreationTimestamp="2025-11-01 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:44:29.002805902 +0000 UTC m=+7.142200729" watchObservedRunningTime="2025-11-01 09:44:29.003035379 +0000 UTC m=+7.142430194"
	Nov 01 09:44:29 newest-cni-722387 kubelet[1332]: I1101 09:44:29.003178    1332 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxnwv" podStartSLOduration=2.003170839 podStartE2EDuration="2.003170839s" podCreationTimestamp="2025-11-01 09:44:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-01 09:44:28.990885078 +0000 UTC m=+7.130279883" watchObservedRunningTime="2025-11-01 09:44:29.003170839 +0000 UTC m=+7.142565658"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-722387 -n newest-cni-722387
E1101 09:44:29.913773  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:29.920249  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:29.931714  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:29.953245  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:29.995223  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-722387 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1101 09:44:30.077500  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sbh67 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner: exit status 1 (59.218309ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sbh67" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.18s)
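
For reference, the post-mortem above boils down to two kubectl invocations. Below is a minimal Go sketch of that flow, assuming kubectl is on PATH and the newest-cni-722387 context from this report still exists; note that the harness describes the pods without a namespace flag, which is why the kube-system pods come back NotFound in the output above.

	// postmortem.go: hypothetical re-run of the non-running-pod check above.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-722387" // context name taken from this report

		// helpers_test.go:269 equivalent: names of pods not in phase Running.
		out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		if err != nil {
			fmt.Printf("list failed: %v\n%s", err, out)
			return
		}

		// helpers_test.go:285 equivalent: describe them in one call. No -n flag
		// is passed, so kubectl looks in "default" and reports NotFound even
		// though the pods live in kube-system.
		args := append([]string{"--context", ctx, "describe", "pod"},
			strings.Fields(string(out))...)
		desc, derr := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("describe (err=%v):\n%s", derr, desc)
	}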

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-214580 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-214580 --alsologtostderr -v=1: exit status 80 (1.616389998s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-214580 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:44:31.878883  428940 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:31.879140  428940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:31.879150  428940 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:31.879154  428940 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:31.879384  428940 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:31.879627  428940 out.go:368] Setting JSON to false
	I1101 09:44:31.879675  428940 mustload.go:66] Loading cluster: embed-certs-214580
	I1101 09:44:31.880071  428940 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:31.880452  428940 cli_runner.go:164] Run: docker container inspect embed-certs-214580 --format={{.State.Status}}
	I1101 09:44:31.898825  428940 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:44:31.899167  428940 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:31.959272  428940 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:84 SystemTime:2025-11-01 09:44:31.948202457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:31.959876  428940 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-214580 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:44:31.962160  428940 out.go:179] * Pausing node embed-certs-214580 ... 
	I1101 09:44:31.963501  428940 host.go:66] Checking if "embed-certs-214580" exists ...
	I1101 09:44:31.963807  428940 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:31.963859  428940 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-214580
	I1101 09:44:31.982309  428940 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/embed-certs-214580/id_rsa Username:docker}
	I1101 09:44:32.082313  428940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:32.095527  428940 pause.go:52] kubelet running: true
	I1101 09:44:32.095595  428940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:32.257926  428940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:32.258035  428940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:32.327239  428940 cri.go:89] found id: "993f4e82116419f59854864bc1ee5f0cf6ba6320e0b5115d8a1cf328f72a9405"
	I1101 09:44:32.327269  428940 cri.go:89] found id: "94bfc341f880370946fcac7fd5ce45c7861054b53499632f386ed99e3432d6c2"
	I1101 09:44:32.327275  428940 cri.go:89] found id: "604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb"
	I1101 09:44:32.327279  428940 cri.go:89] found id: "4e54db4ff164762c55475c64b60cd58e8006a9d8724b2134ba5420988328409a"
	I1101 09:44:32.327281  428940 cri.go:89] found id: "4afe29f878054c6f745c8446b62728a0f47041b20a9aebe50516a89df2ce3ad4"
	I1101 09:44:32.327285  428940 cri.go:89] found id: "92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1"
	I1101 09:44:32.327287  428940 cri.go:89] found id: "900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8"
	I1101 09:44:32.327289  428940 cri.go:89] found id: "e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc"
	I1101 09:44:32.327292  428940 cri.go:89] found id: "44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91"
	I1101 09:44:32.327299  428940 cri.go:89] found id: "d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	I1101 09:44:32.327303  428940 cri.go:89] found id: "2c7e75150e82583057ddfb35cc9f50ac38e6bb51044ed6dc95dae3d75032542c"
	I1101 09:44:32.327307  428940 cri.go:89] found id: ""
	I1101 09:44:32.327365  428940 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:32.339687  428940 retry.go:31] will retry after 285.551507ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:32Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:32.626251  428940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:32.639667  428940 pause.go:52] kubelet running: false
	I1101 09:44:32.639741  428940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:32.774814  428940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:32.774958  428940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:32.845486  428940 cri.go:89] found id: "993f4e82116419f59854864bc1ee5f0cf6ba6320e0b5115d8a1cf328f72a9405"
	I1101 09:44:32.845518  428940 cri.go:89] found id: "94bfc341f880370946fcac7fd5ce45c7861054b53499632f386ed99e3432d6c2"
	I1101 09:44:32.845525  428940 cri.go:89] found id: "604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb"
	I1101 09:44:32.845530  428940 cri.go:89] found id: "4e54db4ff164762c55475c64b60cd58e8006a9d8724b2134ba5420988328409a"
	I1101 09:44:32.845534  428940 cri.go:89] found id: "4afe29f878054c6f745c8446b62728a0f47041b20a9aebe50516a89df2ce3ad4"
	I1101 09:44:32.845539  428940 cri.go:89] found id: "92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1"
	I1101 09:44:32.845544  428940 cri.go:89] found id: "900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8"
	I1101 09:44:32.845548  428940 cri.go:89] found id: "e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc"
	I1101 09:44:32.845553  428940 cri.go:89] found id: "44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91"
	I1101 09:44:32.845569  428940 cri.go:89] found id: "d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	I1101 09:44:32.845573  428940 cri.go:89] found id: "2c7e75150e82583057ddfb35cc9f50ac38e6bb51044ed6dc95dae3d75032542c"
	I1101 09:44:32.845577  428940 cri.go:89] found id: ""
	I1101 09:44:32.845616  428940 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:32.858130  428940 retry.go:31] will retry after 330.15401ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:32Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:33.188686  428940 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:33.204025  428940 pause.go:52] kubelet running: false
	I1101 09:44:33.204119  428940 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:33.342493  428940 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:33.342581  428940 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:33.411372  428940 cri.go:89] found id: "993f4e82116419f59854864bc1ee5f0cf6ba6320e0b5115d8a1cf328f72a9405"
	I1101 09:44:33.411400  428940 cri.go:89] found id: "94bfc341f880370946fcac7fd5ce45c7861054b53499632f386ed99e3432d6c2"
	I1101 09:44:33.411406  428940 cri.go:89] found id: "604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb"
	I1101 09:44:33.411411  428940 cri.go:89] found id: "4e54db4ff164762c55475c64b60cd58e8006a9d8724b2134ba5420988328409a"
	I1101 09:44:33.411415  428940 cri.go:89] found id: "4afe29f878054c6f745c8446b62728a0f47041b20a9aebe50516a89df2ce3ad4"
	I1101 09:44:33.411420  428940 cri.go:89] found id: "92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1"
	I1101 09:44:33.411425  428940 cri.go:89] found id: "900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8"
	I1101 09:44:33.411428  428940 cri.go:89] found id: "e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc"
	I1101 09:44:33.411432  428940 cri.go:89] found id: "44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91"
	I1101 09:44:33.411448  428940 cri.go:89] found id: "d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	I1101 09:44:33.411452  428940 cri.go:89] found id: "2c7e75150e82583057ddfb35cc9f50ac38e6bb51044ed6dc95dae3d75032542c"
	I1101 09:44:33.411457  428940 cri.go:89] found id: ""
	I1101 09:44:33.411504  428940 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:33.426546  428940 out.go:203] 
	W1101 09:44:33.428467  428940 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:44:33.428538  428940 out.go:285] * 
	* 
	W1101 09:44:33.432636  428940 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:44:33.434031  428940 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-214580 --alsologtostderr -v=1 failed: exit status 80
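The failing step in the stderr above is minikube's container listing, which shells out to "sudo runc list -f json" and gives up after its retries because /run/runc never appears. Below is a minimal diagnostic sketch, assuming the embed-certs-214580 profile is still up and minikube is on PATH; the state directories probed are assumed candidates for illustration, not paths taken from the report.

	// runcprobe.go: hypothetical probe of runtime state dirs inside the node.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		profile := "embed-certs-214580" // profile name taken from this report

		for _, cmd := range []string{
			// Which runtime state roots actually exist? (assumed candidates)
			"ls -d /run/runc /run/crun /run/crio 2>/dev/null; true",
			// The CRI's own view of running containers, for comparison.
			"sudo crictl ps --quiet",
			// The exact call the pause path retried above.
			"sudo runc list -f json; true",
		} {
			out, err := exec.Command("minikube", "ssh", "-p", profile, "--", cmd).CombinedOutput()
			fmt.Printf("$ %s\nerr=%v\n%s\n", cmd, err, out)
		}
	}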
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-214580
helpers_test.go:243: (dbg) docker inspect embed-certs-214580:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0",
	        "Created": "2025-11-01T09:42:23.57612126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415461,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:43:35.02236605Z",
	            "FinishedAt": "2025-11-01T09:43:34.058539964Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/hosts",
	        "LogPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0-json.log",
	        "Name": "/embed-certs-214580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-214580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-214580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0",
	                "LowerDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-214580",
	                "Source": "/var/lib/docker/volumes/embed-certs-214580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-214580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-214580",
	                "name.minikube.sigs.k8s.io": "embed-certs-214580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6532393ab15dd79755c611e8ef4a73a6e779ce911719985dde87d9464bc34324",
	            "SandboxKey": "/var/run/docker/netns/6532393ab15d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-214580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:82:ec:7e:49:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef396acdcfefe4b7ce9bad3abfa4446d31948191a9bcabcff15b305b8fa3a9ee",
	                    "EndpointID": "a5faafa98d2a3a3011004aff5d94302e99d44743d391dc56848df81dc09d3bbc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-214580",
	                        "7217dfc1b74f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
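Instead of scanning the full JSON above, the fields this post-mortem cares about can be extracted with the same Go-template --format/index syntax minikube itself uses elsewhere in this log (a sketch; container name and port are taken from this report):

    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-214580
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-214580

Given the JSON above, the first prints "running paused=false" and the second "33121".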
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580: exit status 2 (345.320389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
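A nonzero exit from "minikube status" encodes which components are not OK rather than a hard error, which is why the harness notes it "may be ok" for a partially paused profile. A machine-readable view is available via the status command's JSON output (a sketch; --output is a standard minikube status flag):

    out/minikube-linux-amd64 status -p embed-certs-214580 --output json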
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-214580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-214580 logs -n 25: (1.129897647s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ stop    │ -p newest-cni-722387 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ image   │ embed-certs-214580 image list --format=json                                                                                                                                                                                                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p embed-certs-214580 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
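	The Audit table above is rendered from minikube's persisted command history; the same records are kept as JSON under the MINIKUBE_HOME shown earlier in this log (the file path below is an assumption based on that value, one JSON object per invocation):
	
	    cat /home/jenkins/minikube-integration/21833-104443/.minikube/logs/audit.json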
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:00.823522  422921 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:00.823684  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823696  422921 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:00.823702  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823906  422921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:00.824429  422921 out.go:368] Setting JSON to false
	I1101 09:44:00.825935  422921 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5179,"bootTime":1761985062,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:00.826062  422921 start.go:143] virtualization: kvm guest
	I1101 09:44:00.828080  422921 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:00.829516  422921 notify.go:221] Checking for updates...
	I1101 09:44:00.829545  422921 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:00.831103  422921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:00.832421  422921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:00.833671  422921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:00.835236  422921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:00.836312  422921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:00.838662  422921 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.838859  422921 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839032  422921 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839168  422921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:00.868651  422921 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:00.868776  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.932313  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.919582405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.932410  422921 docker.go:319] overlay module found
	I1101 09:44:00.934186  422921 out.go:179] * Using the docker driver based on user configuration
	I1101 09:44:00.935396  422921 start.go:309] selected driver: docker
	I1101 09:44:00.935426  422921 start.go:930] validating driver "docker" against <nil>
	I1101 09:44:00.935441  422921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:00.936076  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.998903  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.988574943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.999261  422921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:44:00.999309  422921 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:44:00.999988  422921 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:01.001892  422921 out.go:179] * Using Docker driver with root privileges
	I1101 09:44:01.003008  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:01.003093  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:01.003109  422921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:44:01.003194  422921 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:01.004455  422921 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:01.005836  422921 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:01.007040  422921 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:01.008185  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.008213  422921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:01.008239  422921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:01.008255  422921 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:01.008363  422921 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:01.008379  422921 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:01.008553  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:01.008588  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json: {Name:mk9b2e752fcdc3711c80d757637de7b71a85dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:01.031509  422921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:01.031532  422921 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:01.031549  422921 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:01.031586  422921 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:01.031708  422921 start.go:364] duration metric: took 99.393µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:01.031740  422921 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:01.031822  422921 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:44:00.646338  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.647289  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.540153  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:04.540648  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:01.033898  422921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:44:01.034155  422921 start.go:159] libmachine.API.Create for "newest-cni-722387" (driver="docker")
	I1101 09:44:01.034187  422921 client.go:173] LocalClient.Create starting
	I1101 09:44:01.034307  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem
	I1101 09:44:01.034359  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034377  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034445  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem
	I1101 09:44:01.034476  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034491  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034944  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:44:01.054283  422921 cli_runner.go:211] docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:44:01.054353  422921 network_create.go:284] running [docker network inspect newest-cni-722387] to gather additional debugging logs...
	I1101 09:44:01.054368  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387
	W1101 09:44:01.073549  422921 cli_runner.go:211] docker network inspect newest-cni-722387 returned with exit code 1
	I1101 09:44:01.073579  422921 network_create.go:287] error running [docker network inspect newest-cni-722387]: docker network inspect newest-cni-722387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-722387 not found
	I1101 09:44:01.073594  422921 network_create.go:289] output of [docker network inspect newest-cni-722387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-722387 not found
	
	** /stderr **
	I1101 09:44:01.073692  422921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:01.093393  422921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d29bf8504a2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:cd:69:fb:c0:b7} reservation:<nil>}
	I1101 09:44:01.094218  422921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a4cb229b081d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:6d:0e:f5:7f:54} reservation:<nil>}
	I1101 09:44:01.095202  422921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-859d00dbc8b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:da:ec:9f:a9:b4} reservation:<nil>}
	I1101 09:44:01.095784  422921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5df57938ba0e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:1b:ab:95:75:01} reservation:<nil>}
	I1101 09:44:01.096312  422921 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-fd9ea47f5997 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:e6:71:2c:14:ef} reservation:<nil>}
	I1101 09:44:01.096837  422921 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ef396acdcfef IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:66:2e:03:68:3f:bb} reservation:<nil>}
	I1101 09:44:01.097629  422921 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1da70}
	I1101 09:44:01.097655  422921 network_create.go:124] attempt to create docker network newest-cni-722387 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1101 09:44:01.097704  422921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-722387 newest-cni-722387
	I1101 09:44:01.177766  422921 network_create.go:108] docker network newest-cni-722387 192.168.103.0/24 created
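	At this point the newest-cni-722387 network exists with the subnet picked above; it can be double-checked with the same inspect template style minikube used (a sketch; names and addresses are taken from this log):
	
	    docker network inspect newest-cni-722387 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	
	which should print "192.168.103.0/24 192.168.103.1".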
	I1101 09:44:01.177827  422921 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-722387" container
	I1101 09:44:01.177901  422921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:44:01.199194  422921 cli_runner.go:164] Run: docker volume create newest-cni-722387 --label name.minikube.sigs.k8s.io=newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:44:01.221436  422921 oci.go:103] Successfully created a docker volume newest-cni-722387
	I1101 09:44:01.221600  422921 cli_runner.go:164] Run: docker run --rm --name newest-cni-722387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --entrypoint /usr/bin/test -v newest-cni-722387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:44:01.677464  422921 oci.go:107] Successfully prepared a docker volume newest-cni-722387
	I1101 09:44:01.677514  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.677544  422921 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:44:01.677623  422921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:44:05.146749  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:07.148682  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:09.647053  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:07.041096  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:09.539685  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:06.398607  422921 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.720922389s)
	I1101 09:44:06.398651  422921 kic.go:203] duration metric: took 4.721100224s to extract preloaded images to volume ...
	W1101 09:44:06.398758  422921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:44:06.398800  422921 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:44:06.398852  422921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:44:06.465541  422921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-722387 --name newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-722387 --network newest-cni-722387 --ip 192.168.103.2 --volume newest-cni-722387:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:44:06.805538  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Running}}
	I1101 09:44:06.828089  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:06.851749  422921 cli_runner.go:164] Run: docker exec newest-cni-722387 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:44:06.904120  422921 oci.go:144] the created container "newest-cni-722387" has a running status.
	I1101 09:44:06.904157  422921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa...
	I1101 09:44:07.001848  422921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:44:07.037979  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:07.065271  422921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:44:07.065301  422921 kic_runner.go:114] Args: [docker exec --privileged newest-cni-722387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:44:07.117749  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:07.141576  422921 machine.go:94] provisionDockerMachine start ...
	I1101 09:44:07.141692  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.170345  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.170754  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.171294  422921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:44:07.329839  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:07.329873  422921 ubuntu.go:182] provisioning hostname "newest-cni-722387"
	I1101 09:44:07.329971  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.351602  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.351850  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.351866  422921 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-722387 && echo "newest-cni-722387" | sudo tee /etc/hostname
	I1101 09:44:07.513163  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:07.513257  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.536121  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.536418  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.536455  422921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-722387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-722387/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-722387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:44:07.690179  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:44:07.690213  422921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:44:07.690235  422921 ubuntu.go:190] setting up certificates
	I1101 09:44:07.690247  422921 provision.go:84] configureAuth start
	I1101 09:44:07.690303  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:07.710388  422921 provision.go:143] copyHostCerts
	I1101 09:44:07.710461  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:44:07.710477  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:44:07.710559  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:44:07.710683  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:44:07.710693  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:44:07.710734  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:44:07.710817  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:44:07.710827  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:44:07.710863  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:44:07.710954  422921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.newest-cni-722387 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-722387]
	I1101 09:44:08.842065  422921 provision.go:177] copyRemoteCerts
	I1101 09:44:08.842134  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:44:08.842180  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:08.862777  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:08.967012  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:44:08.987471  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:44:09.005392  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:44:09.024170  422921 provision.go:87] duration metric: took 1.333906879s to configureAuth
	I1101 09:44:09.024208  422921 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:44:09.024391  422921 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:09.024511  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.046693  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:09.046953  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:09.046976  422921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:44:09.318902  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
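Note: the SSH command above drops a one-line environment file at /etc/sysconfig/crio.minikube marking the service CIDR (10.96.0.0/12) as an insecure registry range, then restarts CRI-O so the option takes effect. To inspect it on the node (illustrative; assumes the profile is still running):

	minikube -p newest-cni-722387 ssh -- cat /etc/sysconfig/crio.minikube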
	I1101 09:44:09.318969  422921 machine.go:97] duration metric: took 2.177370299s to provisionDockerMachine
	I1101 09:44:09.318981  422921 client.go:176] duration metric: took 8.284787176s to LocalClient.Create
	I1101 09:44:09.319007  422921 start.go:167] duration metric: took 8.284854636s to libmachine.API.Create "newest-cni-722387"
	I1101 09:44:09.319021  422921 start.go:293] postStartSetup for "newest-cni-722387" (driver="docker")
	I1101 09:44:09.319035  422921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:44:09.319106  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:44:09.319169  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.339325  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.443792  422921 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:44:09.447954  422921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:44:09.447981  422921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:44:09.448002  422921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:44:09.448066  422921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:44:09.448161  422921 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:44:09.448269  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:44:09.457217  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:09.478393  422921 start.go:296] duration metric: took 159.356449ms for postStartSetup
	I1101 09:44:09.478781  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:09.497615  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:09.497880  422921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:44:09.497971  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.516558  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.616534  422921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:44:09.621184  422921 start.go:128] duration metric: took 8.589348109s to createHost
	I1101 09:44:09.621206  422921 start.go:83] releasing machines lock for "newest-cni-722387", held for 8.589483705s
	I1101 09:44:09.621261  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:09.639137  422921 ssh_runner.go:195] Run: cat /version.json
	I1101 09:44:09.639152  422921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:44:09.639193  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.639227  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.659576  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.660064  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.813835  422921 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:09.820702  422921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:44:09.859165  422921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:44:09.863899  422921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:44:09.863990  422921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:44:09.891587  422921 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1101 09:44:09.891611  422921 start.go:496] detecting cgroup driver to use...
	I1101 09:44:09.891642  422921 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:44:09.891685  422921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:44:09.908170  422921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:44:09.920701  422921 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:44:09.920762  422921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:44:09.939277  422921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:44:09.958203  422921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:44:10.041329  422921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:44:10.138596  422921 docker.go:234] disabling docker service ...
	I1101 09:44:10.138674  422921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:44:10.162388  422921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:44:10.183310  422921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:44:10.277717  422921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:44:10.364259  422921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:44:10.377455  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:44:10.392986  422921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:44:10.393061  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.404147  422921 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:44:10.404225  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.414290  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.424717  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.434248  422921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:44:10.444846  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.466459  422921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.491214  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
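After the sed edits above, the drop-in /etc/crio/crio.conf.d/02-crio.conf should contain roughly the following settings (a reconstruction from the commands in this log, not a dump of the actual file; section headers follow CRI-O's standard layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]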
	I1101 09:44:10.504176  422921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:44:10.514111  422921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
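The ip_forward write above is the non-persistent form; forwarding must be enabled so pod traffic can be routed between veth/bridge interfaces and the node network. A persistent equivalent, for reference (standard sysctl mechanics, not something minikube does here):

	echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-kubernetes.conf
	sudo sysctl --system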
	I1101 09:44:10.522737  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:10.603998  422921 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:44:10.745956  422921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:44:10.746039  422921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:44:10.750485  422921 start.go:564] Will wait 60s for crictl version
	I1101 09:44:10.750549  422921 ssh_runner.go:195] Run: which crictl
	I1101 09:44:10.754696  422921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:44:10.782770  422921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:44:10.782858  422921 ssh_runner.go:195] Run: crio --version
	I1101 09:44:10.814129  422921 ssh_runner.go:195] Run: crio --version
	I1101 09:44:10.842831  422921 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:44:10.845742  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:10.865977  422921 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:44:10.870737  422921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:10.886253  422921 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:44:10.888153  422921 kubeadm.go:884] updating cluster {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:44:10.888347  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:10.888429  422921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:10.923317  422921 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:10.923339  422921 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:44:10.923383  422921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:10.954698  422921 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:10.954725  422921 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:44:10.954734  422921 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 09:44:10.954838  422921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-722387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
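The unit fragment above is installed as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears the base unit's command before the override sets the new one, the standard systemd pattern for replacing rather than appending to ExecStart. The merged unit can be inspected with (illustrative):

	systemctl cat kubelet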
	I1101 09:44:10.954936  422921 ssh_runner.go:195] Run: crio config
	I1101 09:44:11.004449  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:11.004473  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:11.004494  422921 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:44:11.004527  422921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-722387 NodeName:newest-cni-722387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:44:11.004682  422921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-722387"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
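The generated kubeadm.yaml above bundles four API objects in one file: InitConfiguration and ClusterConfiguration (kubeadm.k8s.io/v1beta4), KubeletConfiguration, and KubeProxyConfiguration. It can be sanity-checked offline with kubeadm itself before init runs (illustrative; assumes the kubeadm config validate subcommand available in this version range):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml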
	I1101 09:44:11.004760  422921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:44:11.014530  422921 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:44:11.014605  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:44:11.024541  422921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:44:11.040294  422921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:44:11.062728  422921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 09:44:11.077519  422921 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:44:11.081688  422921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:11.094974  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:11.196048  422921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:11.218864  422921 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387 for IP: 192.168.103.2
	I1101 09:44:11.218886  422921 certs.go:195] generating shared ca certs ...
	I1101 09:44:11.218905  422921 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.219079  422921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:44:11.219129  422921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:44:11.219137  422921 certs.go:257] generating profile certs ...
	I1101 09:44:11.219206  422921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key
	I1101 09:44:11.219226  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt with IP's: []
	I1101 09:44:11.461428  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt ...
	I1101 09:44:11.461455  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt: {Name:mka26fe91724530410954f0cb0f760186d382fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.461645  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key ...
	I1101 09:44:11.461660  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key: {Name:mkfd8769aff14fe4cbc98be403d7018408109ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.462191  422921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae
	I1101 09:44:11.462211  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 09:44:11.868383  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae ...
	I1101 09:44:11.868418  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae: {Name:mk4f413fba17a26ebf9c87bc9593ce90dfb89ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.868634  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae ...
	I1101 09:44:11.868654  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae: {Name:mk80e9deceb79e9196c5e16230d90849359b0914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.868766  422921 certs.go:382] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt
	I1101 09:44:11.868896  422921 certs.go:386] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key
	I1101 09:44:11.869007  422921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key
	I1101 09:44:11.869030  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt with IP's: []
	I1101 09:44:12.000122  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt ...
	I1101 09:44:12.000158  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt: {Name:mk2ad63222a5177d2492cb7d1ba84a51f7e11b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:12.000356  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key ...
	I1101 09:44:12.000380  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key: {Name:mk162ecceb80bafb66ce5e25b61bc5c04bab15ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:12.000606  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:44:12.000657  422921 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:44:12.000669  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:44:12.000700  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:44:12.000738  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:44:12.000771  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:44:12.000830  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:12.001938  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:44:12.022650  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:44:12.042868  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:44:12.061963  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:44:12.080887  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:44:12.100615  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:44:12.120330  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:44:12.141267  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:44:12.163343  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:44:12.186827  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:44:12.208773  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:44:12.230567  422921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:44:12.246541  422921 ssh_runner.go:195] Run: openssl version
	I1101 09:44:12.253877  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:44:12.264673  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.270201  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.270261  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.306242  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:44:12.315679  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:44:12.325013  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.329129  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.329188  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.371362  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:44:12.380856  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:44:12.390756  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.394977  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.395045  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.432336  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
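The hash-named symlinks created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash lookup scheme for /etc/ssl/certs: the link name is the output of openssl x509 -hash for the certificate it points at. Spot-checking one by hand (illustrative):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem   # expected: 3ec20f2e
	ls -l /etc/ssl/certs/3ec20f2e.0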
	I1101 09:44:12.442783  422921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:44:12.446834  422921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:44:12.446922  422921 kubeadm.go:401] StartCluster: {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:12.447025  422921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:44:12.447084  422921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:44:12.476859  422921 cri.go:89] found id: ""
	I1101 09:44:12.476960  422921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:44:12.485947  422921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:44:12.495118  422921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:44:12.495191  422921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:44:12.503837  422921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:44:12.503858  422921 kubeadm.go:158] found existing configuration files:
	
	I1101 09:44:12.503904  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:44:12.513028  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:44:12.513123  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:44:12.521312  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:44:12.529973  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:44:12.530061  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:44:12.539012  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:44:12.547839  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:44:12.547902  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:44:12.555437  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:44:12.563284  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:44:12.563337  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:44:12.571308  422921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
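The long --ignore-preflight-errors list above suppresses checks that cannot pass inside a docker container (SystemVerification, Swap, Mem, the bridge-nf sysctl file, and so on), matching the "ignoring SystemVerification for kubeadm because of docker driver" line earlier. The preflight phase can be re-run in isolation to see what would have failed (illustrative):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml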
	I1101 09:44:12.610706  422921 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:44:12.610775  422921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:44:12.633149  422921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:44:12.633212  422921 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:44:12.633239  422921 kubeadm.go:319] OS: Linux
	I1101 09:44:12.633299  422921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:44:12.633366  422921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:44:12.633462  422921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:44:12.633544  422921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:44:12.633643  422921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:44:12.633730  422921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:44:12.633795  422921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:44:12.633869  422921 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:44:12.697497  422921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:44:12.697699  422921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:44:12.697851  422921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:44:12.706690  422921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 09:44:11.647095  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:14.146846  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:12.040114  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:14.539447  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:12.709509  422921 out.go:252]   - Generating certificates and keys ...
	I1101 09:44:12.709606  422921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:44:12.709733  422921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:44:12.854194  422921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:44:13.380816  422921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:44:13.429834  422921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:44:13.579950  422921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:44:13.897795  422921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:44:13.898003  422921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-722387] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:44:14.063204  422921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:44:14.063358  422921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-722387] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:44:14.595857  422921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:44:14.864904  422921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:44:14.974817  422921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:44:14.974965  422921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:44:15.411035  422921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:44:15.738220  422921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:44:16.163033  422921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:44:16.379713  422921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:44:16.656283  422921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:44:16.656703  422921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:44:16.660761  422921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 09:44:16.646757  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	I1101 09:44:18.647241  415212 pod_ready.go:94] pod "coredns-66bc5c9577-cmnj8" is "Ready"
	I1101 09:44:18.647288  415212 pod_ready.go:86] duration metric: took 32.006837487s for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.650658  415212 pod_ready.go:83] waiting for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.655619  415212 pod_ready.go:94] pod "etcd-embed-certs-214580" is "Ready"
	I1101 09:44:18.655650  415212 pod_ready.go:86] duration metric: took 4.963735ms for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.658523  415212 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.664301  415212 pod_ready.go:94] pod "kube-apiserver-embed-certs-214580" is "Ready"
	I1101 09:44:18.664329  415212 pod_ready.go:86] duration metric: took 5.774053ms for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.666532  415212 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.845825  415212 pod_ready.go:94] pod "kube-controller-manager-embed-certs-214580" is "Ready"
	I1101 09:44:18.845858  415212 pod_ready.go:86] duration metric: took 179.302458ms for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.045094  415212 pod_ready.go:83] waiting for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.445427  415212 pod_ready.go:94] pod "kube-proxy-49j45" is "Ready"
	I1101 09:44:19.445528  415212 pod_ready.go:86] duration metric: took 400.403346ms for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.645441  415212 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:20.044873  415212 pod_ready.go:94] pod "kube-scheduler-embed-certs-214580" is "Ready"
	I1101 09:44:20.044906  415212 pod_ready.go:86] duration metric: took 399.441ms for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:20.044958  415212 pod_ready.go:40] duration metric: took 33.416946486s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:44:20.092487  415212 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:20.094263  415212 out.go:179] * Done! kubectl is now configured to use "embed-certs-214580" cluster and "default" namespace by default
	W1101 09:44:17.039371  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:19.039784  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:16.662416  422921 out.go:252]   - Booting up control plane ...
	I1101 09:44:16.662552  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:44:16.662673  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:44:16.663362  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:44:16.678425  422921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:44:16.678561  422921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:44:16.687747  422921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:44:16.688059  422921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:44:16.688132  422921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:44:16.797757  422921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:44:16.797944  422921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:44:17.299548  422921 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.906008ms
	I1101 09:44:17.303198  422921 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:44:17.303364  422921 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1101 09:44:17.303521  422921 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:44:17.303650  422921 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:44:18.761873  422921 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.458638771s
	I1101 09:44:19.840176  422921 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.53698671s
	I1101 09:44:21.304575  422921 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001294509s
	I1101 09:44:21.315500  422921 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:44:21.326477  422921 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:44:21.336740  422921 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:44:21.337047  422921 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-722387 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:44:21.345651  422921 kubeadm.go:319] [bootstrap-token] Using token: hcqanb.hb6jvis691nmk76a
	I1101 09:44:21.347127  422921 out.go:252]   - Configuring RBAC rules ...
	I1101 09:44:21.347291  422921 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:44:21.350981  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:44:21.357244  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:44:21.360342  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:44:21.364309  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:44:21.367361  422921 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:44:21.712031  422921 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:44:22.130627  422921 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:44:22.711154  422921 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:44:22.712086  422921 kubeadm.go:319] 
	I1101 09:44:22.712150  422921 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:44:22.712177  422921 kubeadm.go:319] 
	I1101 09:44:22.712290  422921 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:44:22.712300  422921 kubeadm.go:319] 
	I1101 09:44:22.712336  422921 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:44:22.712412  422921 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:44:22.712476  422921 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:44:22.712501  422921 kubeadm.go:319] 
	I1101 09:44:22.712589  422921 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:44:22.712599  422921 kubeadm.go:319] 
	I1101 09:44:22.712661  422921 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:44:22.712670  422921 kubeadm.go:319] 
	I1101 09:44:22.712714  422921 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:44:22.712814  422921 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:44:22.712954  422921 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:44:22.712966  422921 kubeadm.go:319] 
	I1101 09:44:22.713076  422921 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:44:22.713147  422921 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:44:22.713153  422921 kubeadm.go:319] 
	I1101 09:44:22.713233  422921 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hcqanb.hb6jvis691nmk76a \
	I1101 09:44:22.713362  422921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 \
	I1101 09:44:22.713400  422921 kubeadm.go:319] 	--control-plane 
	I1101 09:44:22.713409  422921 kubeadm.go:319] 
	I1101 09:44:22.713509  422921 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:44:22.713529  422921 kubeadm.go:319] 
	I1101 09:44:22.713633  422921 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hcqanb.hb6jvis691nmk76a \
	I1101 09:44:22.713766  422921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 
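The --discovery-token-ca-cert-hash in the join commands pins the cluster CA: it is the SHA-256 of the CA certificate's DER-encoded public key. It can be recomputed on the node with the standard recipe from the Kubernetes docs (paths adjusted to this minikube layout):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex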
	I1101 09:44:22.716577  422921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:44:22.716763  422921 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1101 09:44:22.716802  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:22.716816  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:22.719006  422921 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 09:44:21.539320  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:23.539726  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:24.539003  415823 pod_ready.go:94] pod "coredns-66bc5c9577-mlk9t" is "Ready"
	I1101 09:44:24.539036  415823 pod_ready.go:86] duration metric: took 37.005664079s for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.542034  415823 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.546171  415823 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.546202  415823 pod_ready.go:86] duration metric: took 4.14183ms for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.548297  415823 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.552057  415823 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.552085  415823 pod_ready.go:86] duration metric: took 3.765443ms for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.553877  415823 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.737469  415823 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.737500  415823 pod_ready.go:86] duration metric: took 183.602214ms for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.936875  415823 pod_ready.go:83] waiting for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.337704  415823 pod_ready.go:94] pod "kube-proxy-dszvg" is "Ready"
	I1101 09:44:25.337740  415823 pod_ready.go:86] duration metric: took 400.799752ms for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.537478  415823 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:22.720310  422921 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:44:22.724894  422921 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:44:22.724941  422921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:44:22.738693  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1101 09:44:22.952858  422921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:44:22.952950  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:22.952991  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-722387 minikube.k8s.io/updated_at=2025_11_01T09_44_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=newest-cni-722387 minikube.k8s.io/primary=true
	I1101 09:44:23.035461  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:23.035594  422921 ops.go:34] apiserver oom_adj: -16
	I1101 09:44:23.536240  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:24.035722  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:24.536107  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.035835  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.535832  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.937251  415823 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:25.937281  415823 pod_ready.go:86] duration metric: took 399.779095ms for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.937293  415823 pod_ready.go:40] duration metric: took 38.410135058s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:44:25.982726  415823 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:25.985490  415823 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927869" cluster and "default" namespace by default
	I1101 09:44:26.036048  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:26.536576  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:27.035529  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:27.106742  422921 kubeadm.go:1114] duration metric: took 4.153880162s to wait for elevateKubeSystemPrivileges
	I1101 09:44:27.106782  422921 kubeadm.go:403] duration metric: took 14.659875744s to StartCluster
	I1101 09:44:27.106806  422921 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:27.106895  422921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:27.108666  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:27.108895  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:44:27.108939  422921 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:44:27.109008  422921 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-722387"
	I1101 09:44:27.109025  422921 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-722387"
	I1101 09:44:27.108892  422921 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:27.109049  422921 addons.go:70] Setting default-storageclass=true in profile "newest-cni-722387"
	I1101 09:44:27.109061  422921 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:27.109082  422921 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-722387"
	I1101 09:44:27.109130  422921 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:27.109483  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.109621  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.111555  422921 out.go:179] * Verifying Kubernetes components...
	I1101 09:44:27.113019  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:27.134154  422921 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:44:27.135067  422921 addons.go:239] Setting addon default-storageclass=true in "newest-cni-722387"
	I1101 09:44:27.135105  422921 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:27.135433  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.135476  422921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:27.135497  422921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:44:27.135550  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:27.165881  422921 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:27.165927  422921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:44:27.165992  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:27.167165  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:27.193516  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:27.208174  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1101 09:44:27.256046  422921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:27.286530  422921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:27.309898  422921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:27.399415  422921 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1101 09:44:27.401255  422921 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:44:27.401316  422921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:44:27.609302  422921 api_server.go:72] duration metric: took 500.236103ms to wait for apiserver process to appear ...
	I1101 09:44:27.609331  422921 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:44:27.609359  422921 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:27.614891  422921 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:44:27.615772  422921 api_server.go:141] control plane version: v1.34.1
	I1101 09:44:27.615796  422921 api_server.go:131] duration metric: took 6.458373ms to wait for apiserver health ...
	I1101 09:44:27.615804  422921 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:44:27.616498  422921 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:44:27.618332  422921 system_pods.go:59] 7 kube-system pods found
	I1101 09:44:27.618371  422921 system_pods.go:61] "etcd-newest-cni-722387" [db6d9615-3fd5-4642-abb7-9c060c90d98e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:44:27.618366  422921 addons.go:515] duration metric: took 509.425146ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:44:27.618380  422921 system_pods.go:61] "kindnet-vq8r5" [0e3ba1a9-d43e-4944-bd85-a7858465eeb5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:44:27.618390  422921 system_pods.go:61] "kube-apiserver-newest-cni-722387" [8e6d728a-c7de-4b60-8627-f4e2729f14b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:44:27.618398  422921 system_pods.go:61] "kube-controller-manager-newest-cni-722387" [a0094ce2-c3fe-4f6f-9f2b-7d9871577296] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:44:27.618404  422921 system_pods.go:61] "kube-proxy-rxnwv" [b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:44:27.618412  422921 system_pods.go:61] "kube-scheduler-newest-cni-722387" [8c1c8755-a1ca-4aa2-894c-b7ae1e5f1ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:44:27.618418  422921 system_pods.go:61] "storage-provisioner" [cca90c7a-0f05-4855-ba4d-530a67715840] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:27.618426  422921 system_pods.go:74] duration metric: took 2.615581ms to wait for pod list to return data ...
	I1101 09:44:27.618435  422921 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:44:27.620461  422921 default_sa.go:45] found service account: "default"
	I1101 09:44:27.620483  422921 default_sa.go:55] duration metric: took 2.03963ms for default service account to be created ...
	I1101 09:44:27.620500  422921 kubeadm.go:587] duration metric: took 511.436014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:27.620522  422921 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:44:27.624060  422921 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:44:27.624099  422921 node_conditions.go:123] node cpu capacity is 8
	I1101 09:44:27.624117  422921 node_conditions.go:105] duration metric: took 3.590038ms to run NodePressure ...
	I1101 09:44:27.624134  422921 start.go:242] waiting for startup goroutines ...
	I1101 09:44:27.905064  422921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-722387" context rescaled to 1 replicas
	I1101 09:44:27.905103  422921 start.go:247] waiting for cluster config update ...
	I1101 09:44:27.905115  422921 start.go:256] writing updated cluster config ...
	I1101 09:44:27.905522  422921 ssh_runner.go:195] Run: rm -f paused
	I1101 09:44:27.956603  422921 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:27.958676  422921 out.go:179] * Done! kubectl is now configured to use "newest-cni-722387" cluster and "default" namespace by default
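
The api_server.go lines above show minikube's readiness pattern: poll for the apiserver process, then poll https://<node-ip>:8443/healthz until it answers 200. A minimal Go sketch of that polling loop follows; the URL, deadline, and the skipped TLS verification are assumptions for illustration, not minikube's actual implementation (which validates the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // This sketch's host does not trust the apiserver's serving cert,
        // so certificate verification is skipped; do not do this in production.
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.103.2:8443/healthz")
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy") // mirrors the 200 logged at 09:44:27.614
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver healthz")
    }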
	
	
	==> CRI-O <==
	Nov 01 09:43:57 embed-certs-214580 crio[561]: time="2025-11-01T09:43:57.665027654Z" level=info msg="Created container a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=b1c91bf5-8f01-4ab2-ab81-cc7d3c3c1156 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:57 embed-certs-214580 crio[561]: time="2025-11-01T09:43:57.665896795Z" level=info msg="Starting container: a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc" id=6eed1cfb-8010-4e7a-8140-c4feaaebf87a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:57 embed-certs-214580 crio[561]: time="2025-11-01T09:43:57.668037389Z" level=info msg="Started container" PID=1732 containerID=a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper id=6eed1cfb-8010-4e7a-8140-c4feaaebf87a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a36b64f22c4e929c9972fdb657313aeae65ba1939b14851263c22f5754be603
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.291827735Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=761432ea-fde2-4146-8368-8fd288480602 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.302750099Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4ecfb0b8-3284-4943-8069-2f1c4cc1491c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.305817519Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=ede2478a-a151-4918-ba49-6bf0dafffb0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.305960146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.371796735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.372542375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.615333933Z" level=info msg="Created container 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=ede2478a-a151-4918-ba49-6bf0dafffb0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.616664658Z" level=info msg="Starting container: 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b" id=6ad10c82-7ba3-4877-8e47-a0bcc7a964a8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.619266706Z" level=info msg="Started container" PID=1741 containerID=8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper id=6ad10c82-7ba3-4877-8e47-a0bcc7a964a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a36b64f22c4e929c9972fdb657313aeae65ba1939b14851263c22f5754be603
	Nov 01 09:43:59 embed-certs-214580 crio[561]: time="2025-11-01T09:43:59.298349359Z" level=info msg="Removing container: a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc" id=7851930b-d862-46ed-a6f6-c714dce36133 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:59 embed-certs-214580 crio[561]: time="2025-11-01T09:43:59.52498389Z" level=info msg="Removed container a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=7851930b-d862-46ed-a6f6-c714dce36133 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.206624356Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=23cc060f-c284-4d8c-9809-74b7c7162b72 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.207480809Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7fb24d1c-ce77-474f-b2d9-5b4f5f2bd50e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.208583803Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=76efb056-84a4-474a-b8e9-ac52a0bf2a94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.208783474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.215392811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.21619355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.243688716Z" level=info msg="Created container d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=76efb056-84a4-474a-b8e9-ac52a0bf2a94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.244567927Z" level=info msg="Starting container: d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e" id=7921f00e-371f-4e37-8920-631c91a65ada name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.246788884Z" level=info msg="Started container" PID=1757 containerID=d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper id=7921f00e-371f-4e37-8920-631c91a65ada name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a36b64f22c4e929c9972fdb657313aeae65ba1939b14851263c22f5754be603
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.337284369Z" level=info msg="Removing container: 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b" id=46b1e771-bfb8-4d1a-a602-f151d028f12c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.348459966Z" level=info msg="Removed container 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=46b1e771-bfb8-4d1a-a602-f151d028f12c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d03cec41bb10f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   3a36b64f22c4e       dashboard-metrics-scraper-6ffb444bf9-2vxx5   kubernetes-dashboard
	2c7e75150e825       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   1bfb7f94d5940       kubernetes-dashboard-855c9754f9-pcx7c        kubernetes-dashboard
	993f4e8211641       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Running             storage-provisioner         1                   851b671a1407e       storage-provisioner                          kube-system
	a00a0012e7f0b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   a3d2d38ff20d8       busybox                                      default
	94bfc341f8803       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           47 seconds ago      Running             coredns                     0                   f43044105fbe4       coredns-66bc5c9577-cmnj8                     kube-system
	604c59ebde7d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   851b671a1407e       storage-provisioner                          kube-system
	4e54db4ff1647       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   579c33216b458       kindnet-v28lz                                kube-system
	4afe29f878054       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           47 seconds ago      Running             kube-proxy                  0                   736f4ca58df0a       kube-proxy-49j45                             kube-system
	92f3e97dd2f0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   d002132176b94       etcd-embed-certs-214580                      kube-system
	900d5eaf90986       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   14039442d974c       kube-controller-manager-embed-certs-214580   kube-system
	e96acc480b4e7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   0d61740d15c5c       kube-apiserver-embed-certs-214580            kube-system
	44596abc18510       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   4ce64542c59d1       kube-scheduler-embed-certs-214580            kube-system
	
	
	==> coredns [94bfc341f880370946fcac7fd5ce45c7861054b53499632f386ed99e3432d6c2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33398 - 39048 "HINFO IN 4650601798089357910.1429863867643008636. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057085215s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
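
Every reflector failure above is the same symptom: TCP dials from the CoreDNS pod to the kubernetes service VIP 10.96.0.1:443 time out until kube-proxy has programmed the service rules. A hedged sketch of the equivalent probe (address and failure mode taken from the log; running it inside the pod network is assumed):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Dial the in-cluster apiserver VIP the way the failing client-go
        // requests above do; an i/o timeout reproduces the CoreDNS errors.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. i/o timeout before kube-proxy syncs
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }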
	
	
	==> describe nodes <==
	Name:               embed-certs-214580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-214580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=embed-certs-214580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-214580
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:43:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-214580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d2ac0cbf-eedb-40ea-a447-534bb7a6586c
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-cmnj8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-214580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-v28lz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-214580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-214580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-49j45                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-214580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2vxx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pcx7c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-214580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-214580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-214580 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node embed-certs-214580 event: Registered Node embed-certs-214580 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-214580 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node embed-certs-214580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node embed-certs-214580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node embed-certs-214580 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node embed-certs-214580 event: Registered Node embed-certs-214580 in Controller
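
The Conditions table above is kubectl's rendering of node.Status.Conditions. To script the same check, a small client-go sketch can fetch and print them; the kubeconfig path and node name are copied from the logs, and this is an illustrative snippet rather than minikube's own code.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a client from the same kubeconfig the log's kubectl calls use.
        config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        node, err := clientset.CoreV1().Nodes().Get(context.TODO(), "embed-certs-214580", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        // Print Type/Status/Reason, matching the columns rendered above.
        for _, c := range node.Status.Conditions {
            fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
        }
    }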
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1] <==
	{"level":"warn","ts":"2025-11-01T09:43:44.347584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.364239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.374741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.390483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.420577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.433065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.456404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.463238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.529996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47776","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:43:58.560791Z","caller":"traceutil/trace.go:172","msg":"trace[1216427449] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"186.558316ms","start":"2025-11-01T09:43:58.374215Z","end":"2025-11-01T09:43:58.560773Z","steps":["trace[1216427449] 'process raft request'  (duration: 186.39563ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:43:58.615370Z","caller":"traceutil/trace.go:172","msg":"trace[174755659] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"241.132389ms","start":"2025-11-01T09:43:58.374215Z","end":"2025-11-01T09:43:58.615348Z","steps":["trace[174755659] 'process raft request'  (duration: 240.997752ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:43:58.806311Z","caller":"traceutil/trace.go:172","msg":"trace[2134451426] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"120.537083ms","start":"2025-11-01T09:43:58.685749Z","end":"2025-11-01T09:43:58.806286Z","steps":["trace[2134451426] 'process raft request'  (duration: 92.467793ms)","trace[2134451426] 'compare'  (duration: 27.961569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:43:59.441171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.553458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5\" limit:1 ","response":"range_response_count:1 size:4612"}
	{"level":"info","ts":"2025-11-01T09:43:59.441353Z","caller":"traceutil/trace.go:172","msg":"trace[1126848669] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5; range_end:; response_count:1; response_revision:598; }","duration":"142.748265ms","start":"2025-11-01T09:43:59.298585Z","end":"2025-11-01T09:43:59.441333Z","steps":["trace[1126848669] 'agreement among raft nodes before linearized reading'  (duration: 43.400863ms)","trace[1126848669] 'range keys from in-memory index tree'  (duration: 99.048302ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:43:59.444577Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.462961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765875784007944 > lease_revoke:<id:5b339a3ecc2e7658>","response":"size:29"}
	{"level":"info","ts":"2025-11-01T09:43:59.444757Z","caller":"traceutil/trace.go:172","msg":"trace[1155981835] transaction","detail":"{read_only:false; response_revision:599; number_of_response:1; }","duration":"145.480109ms","start":"2025-11-01T09:43:59.299268Z","end":"2025-11-01T09:43:59.444748Z","steps":["trace[1155981835] 'process raft request'  (duration: 145.390809ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:44:05.050114Z","caller":"traceutil/trace.go:172","msg":"trace[1699605369] linearizableReadLoop","detail":"{readStateIndex:635; appliedIndex:635; }","duration":"123.18778ms","start":"2025-11-01T09:44:04.926902Z","end":"2025-11-01T09:44:05.050090Z","steps":["trace[1699605369] 'read index received'  (duration: 123.178363ms)","trace[1699605369] 'applied index is now lower than readState.Index'  (duration: 8.181µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:44:05.059163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.234922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-11-01T09:44:05.059328Z","caller":"traceutil/trace.go:172","msg":"trace[1451019433] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:606; }","duration":"132.415063ms","start":"2025-11-01T09:44:04.926896Z","end":"2025-11-01T09:44:05.059311Z","steps":["trace[1451019433] 'agreement among raft nodes before linearized reading'  (duration: 123.311249ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:44:05.059331Z","caller":"traceutil/trace.go:172","msg":"trace[655869371] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"135.794897ms","start":"2025-11-01T09:44:04.923518Z","end":"2025-11-01T09:44:05.059313Z","steps":["trace[655869371] 'process raft request'  (duration: 126.609259ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:44:05.059355Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.499652ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:44:05.059399Z","caller":"traceutil/trace.go:172","msg":"trace[870144663] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:607; }","duration":"124.554626ms","start":"2025-11-01T09:44:04.934835Z","end":"2025-11-01T09:44:05.059390Z","steps":["trace[870144663] 'agreement among raft nodes before linearized reading'  (duration: 124.476565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:44:05.334942Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.413369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:44:05.335020Z","caller":"traceutil/trace.go:172","msg":"trace[63103734] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"106.533273ms","start":"2025-11-01T09:44:05.228472Z","end":"2025-11-01T09:44:05.335005Z","steps":["trace[63103734] 'range keys from in-memory index tree'  (duration: 106.345215ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:44:06.030237Z","caller":"traceutil/trace.go:172","msg":"trace[538418333] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"125.999449ms","start":"2025-11-01T09:44:05.904217Z","end":"2025-11-01T09:44:06.030217Z","steps":["trace[538418333] 'process raft request'  (duration: 125.851371ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:44:34 up  1:26,  0 user,  load average: 8.79, 5.98, 3.53
	Linux embed-certs-214580 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e54db4ff164762c55475c64b60cd58e8006a9d8724b2134ba5420988328409a] <==
	I1101 09:43:46.886286       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:46.886585       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:43:46.886768       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:46.886839       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:46.886897       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:47.182829       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:47.182854       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:47.182865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:47.183057       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:47.683822       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:47.683859       1 metrics.go:72] Registering metrics
	I1101 09:43:47.683977       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:57.183098       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:43:57.183191       1 main.go:301] handling current node
	I1101 09:44:07.188047       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:44:07.188077       1 main.go:301] handling current node
	I1101 09:44:17.183053       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:44:17.183085       1 main.go:301] handling current node
	I1101 09:44:27.183158       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:44:27.183198       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc] <==
	I1101 09:43:45.298718       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:43:45.298754       1 policy_source.go:240] refreshing policies
	I1101 09:43:45.303864       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:43:45.307587       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:43:45.313987       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:43:45.314051       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:43:45.314073       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:43:45.314082       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:45.314090       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:43:45.342818       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:43:45.343274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:45.343467       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:43:45.344334       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:43:45.344425       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:43:45.773710       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:43:45.811103       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:43:45.836987       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:45.848587       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:45.857552       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:43:45.918185       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.164.56"}
	I1101 09:43:45.932332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.179.192"}
	I1101 09:43:46.149889       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:48.674190       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:48.726942       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:43:49.023301       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8] <==
	I1101 09:43:48.595795       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:43:48.605155       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:43:48.612438       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:43:48.621324       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:43:48.621360       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:43:48.621379       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:43:48.622327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:43:48.622436       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:43:48.622570       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:43:48.622698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-214580"
	I1101 09:43:48.622752       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:43:48.625610       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:43:48.626789       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:43:48.626988       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:48.628212       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:43:48.628241       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:43:48.630833       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:43:48.631018       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:48.631032       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:43:48.631049       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:43:48.631514       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:43:48.634116       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:43:48.634575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:43:48.641994       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:43:48.648095       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4afe29f878054c6f745c8446b62728a0f47041b20a9aebe50516a89df2ce3ad4] <==
	I1101 09:43:46.699810       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:43:46.778303       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:43:46.878859       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:43:46.878930       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:43:46.879049       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:43:46.904672       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:46.904737       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:43:46.911497       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:43:46.912143       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:43:46.912354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:46.915554       1 config.go:200] "Starting service config controller"
	I1101 09:43:46.915577       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:43:46.915786       1 config.go:309] "Starting node config controller"
	I1101 09:43:46.915809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:43:46.915818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:43:46.916219       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:43:46.916240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:43:46.916259       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:43:46.916263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:43:47.016021       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:43:47.017219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:43:47.017245       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91] <==
	I1101 09:43:43.542625       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:43:45.167982       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:43:45.168190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:43:45.168211       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:43:45.168222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:43:45.310042       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:43:45.310078       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:45.317219       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:45.317318       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:45.322139       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:43:45.321505       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:43:45.417749       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
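
The requestheader warning above carries its own remediation; written out with illustrative placeholder names (the rolebinding name and service account below are hypothetical, not values from this cluster):

    kubectl create rolebinding auth-reader \
      -n kube-system \
      --role=extension-apiserver-authentication-reader \
      --serviceaccount=kube-system:my-component

Without that binding the scheduler falls back to treating requests as anonymous, as the follow-up messages note.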
	
	
	==> kubelet <==
	Nov 01 09:43:46 embed-certs-214580 kubelet[713]: I1101 09:43:46.309864     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/234d7bd6-5336-4ec0-8d37-9e59105a6166-lib-modules\") pod \"kube-proxy-49j45\" (UID: \"234d7bd6-5336-4ec0-8d37-9e59105a6166\") " pod="kube-system/kube-proxy-49j45"
	Nov 01 09:43:47 embed-certs-214580 kubelet[713]: I1101 09:43:47.249358     713 scope.go:117] "RemoveContainer" containerID="604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233484     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8827\" (UniqueName: \"kubernetes.io/projected/f08d1bad-7e3f-401e-b29b-25804d0f1324-kube-api-access-b8827\") pod \"dashboard-metrics-scraper-6ffb444bf9-2vxx5\" (UID: \"f08d1bad-7e3f-401e-b29b-25804d0f1324\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233572     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1ea5a6d-90cf-47e8-b721-ea8375535952-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pcx7c\" (UID: \"a1ea5a6d-90cf-47e8-b721-ea8375535952\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcx7c"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233608     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f08d1bad-7e3f-401e-b29b-25804d0f1324-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2vxx5\" (UID: \"f08d1bad-7e3f-401e-b29b-25804d0f1324\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233705     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttnv\" (UniqueName: \"kubernetes.io/projected/a1ea5a6d-90cf-47e8-b721-ea8375535952-kube-api-access-sttnv\") pod \"kubernetes-dashboard-855c9754f9-pcx7c\" (UID: \"a1ea5a6d-90cf-47e8-b721-ea8375535952\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcx7c"
	Nov 01 09:43:55 embed-certs-214580 kubelet[713]: I1101 09:43:55.303515     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcx7c" podStartSLOduration=1.457659727 podStartE2EDuration="6.303487393s" podCreationTimestamp="2025-11-01 09:43:49 +0000 UTC" firstStartedPulling="2025-11-01 09:43:49.429468005 +0000 UTC m=+7.346463722" lastFinishedPulling="2025-11-01 09:43:54.275295666 +0000 UTC m=+12.192291388" observedRunningTime="2025-11-01 09:43:55.303220789 +0000 UTC m=+13.220216530" watchObservedRunningTime="2025-11-01 09:43:55.303487393 +0000 UTC m=+13.220483132"
	Nov 01 09:43:58 embed-certs-214580 kubelet[713]: I1101 09:43:58.291380     713 scope.go:117] "RemoveContainer" containerID="a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc"
	Nov 01 09:43:59 embed-certs-214580 kubelet[713]: I1101 09:43:59.296785     713 scope.go:117] "RemoveContainer" containerID="a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc"
	Nov 01 09:43:59 embed-certs-214580 kubelet[713]: I1101 09:43:59.296937     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:43:59 embed-certs-214580 kubelet[713]: E1101 09:43:59.297169     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:00 embed-certs-214580 kubelet[713]: I1101 09:44:00.301830     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:00 embed-certs-214580 kubelet[713]: E1101 09:44:00.302064     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:01 embed-certs-214580 kubelet[713]: I1101 09:44:01.654023     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:01 embed-certs-214580 kubelet[713]: E1101 09:44:01.654360     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: I1101 09:44:12.206186     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: I1101 09:44:12.335643     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: I1101 09:44:12.335965     713 scope.go:117] "RemoveContainer" containerID="d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: E1101 09:44:12.336232     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:21 embed-certs-214580 kubelet[713]: I1101 09:44:21.653837     713 scope.go:117] "RemoveContainer" containerID="d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	Nov 01 09:44:21 embed-certs-214580 kubelet[713]: E1101 09:44:21.654133     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: kubelet.service: Consumed 1.801s CPU time.
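
The kubelet entries show CrashLoopBackOff's exponential back-off at work: the restart delay for dashboard-metrics-scraper doubles from 10s to 20s between attempts. To see why the container keeps exiting, the previous instance's logs are usually the first stop (pod name taken from the entries above):

    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-2vxx5 --previous
    kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-2vxx5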
	
	
	==> kubernetes-dashboard [2c7e75150e82583057ddfb35cc9f50ac38e6bb51044ed6dc95dae3d75032542c] <==
	2025/11/01 09:43:54 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:54 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:54 Using secret token for csrf signing
	2025/11/01 09:43:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:54 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:43:54 Generating JWE encryption key
	2025/11/01 09:43:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:54 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:54 Creating in-cluster Sidecar client
	2025/11/01 09:43:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:54 Serving insecurely on HTTP port: 9090
	2025/11/01 09:44:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:54 Starting overwatch
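
The metric client health check fails because its backend, dashboard-metrics-scraper, is the pod crash-looping in the kubelet log above; the dashboard itself keeps serving and retries every 30 seconds. A quick way to confirm the broken backend:

    kubectl -n kubernetes-dashboard get pods,svc
    kubectl -n kubernetes-dashboard get endpointslices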
	
	
	==> storage-provisioner [604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb] <==
	I1101 09:43:46.652319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:46.654733       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
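
This container died because it raced the API server during node start (10.96.0.1:443 refused the connection); the kubelet entry at 09:43:47 removing container 604c59eb... is this instance being cleaned up before the restart captured in the next block. To confirm after the fact:

    kubectl -n kube-system get pod storage-provisioner        # RESTARTS column records the retry
    kubectl -n kube-system logs storage-provisioner --previous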
	
	
	==> storage-provisioner [993f4e82116419f59854864bc1ee5f0cf6ba6320e0b5115d8a1cf328f72a9405] <==
	W1101 09:44:09.092531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:11.101298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:11.105883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:13.109064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:13.114590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:15.118368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:15.124943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.128321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.133290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.136468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.140536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.144440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.149952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.153833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.158217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.161189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.168162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.172542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.178104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.181434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.185768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.188487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.194187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.197315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.201732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
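
The repeating warnings apparently come from the provisioner's leader-election loop, which still reads a v1 Endpoints object roughly every two seconds; each read trips the v1.33+ deprecation notice. The same information is available warning-free from the replacement API:

    kubectl -n kube-system get endpoints          # emits the deprecation warning above
    kubectl -n kube-system get endpointslices     # discovery.k8s.io/v1 equivalent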
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214580 -n embed-certs-214580
E1101 09:44:35.053075  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214580 -n embed-certs-214580: exit status 2 (365.019448ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-214580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-214580
helpers_test.go:243: (dbg) docker inspect embed-certs-214580:

-- stdout --
	[
	    {
	        "Id": "7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0",
	        "Created": "2025-11-01T09:42:23.57612126Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415461,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:43:35.02236605Z",
	            "FinishedAt": "2025-11-01T09:43:34.058539964Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/hostname",
	        "HostsPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/hosts",
	        "LogPath": "/var/lib/docker/containers/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0/7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0-json.log",
	        "Name": "/embed-certs-214580",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-214580:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-214580",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7217dfc1b74f5113801b1c7389aa8b19632e2f6eef5d202f8a00027f57d531b0",
	                "LowerDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04e9455ea1d1699fe216eb8b8e927f74478f7a991439644c035a3ed4da30a9be/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-214580",
	                "Source": "/var/lib/docker/volumes/embed-certs-214580/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-214580",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-214580",
	                "name.minikube.sigs.k8s.io": "embed-certs-214580",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6532393ab15dd79755c611e8ef4a73a6e779ce911719985dde87d9464bc34324",
	            "SandboxKey": "/var/run/docker/netns/6532393ab15d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-214580": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:82:ec:7e:49:b3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ef396acdcfefe4b7ce9bad3abfa4446d31948191a9bcabcff15b305b8fa3a9ee",
	                    "EndpointID": "a5faafa98d2a3a3011004aff5d94302e99d44743d391dc56848df81dc09d3bbc",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-214580",
	                        "7217dfc1b74f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
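
The inspect output shows each container port published to an ephemeral host port on 127.0.0.1 (8443, the apiserver port, maps to 33121). A single mapping can be pulled out with the same Go-template mechanism the status checks below use; a sketch:

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-214580
    # prints 33121 for this container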
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580: exit status 2 (365.732319ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
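
minikube folds component health into the exit code even when the one formatted field prints Running, which is why this just-paused cluster yields exit status 2 here. The JSON output shows every component at once; a sketch using the report's own binary path:

    out/minikube-linux-amd64 status -p embed-certs-214580 -o json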
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-214580 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-214580 logs -n 25: (1.130147817s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-224845 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:42 UTC │                     │
	│ stop    │ -p no-preload-224845 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable metrics-server -p embed-certs-214580 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-927869 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ stop    │ -p embed-certs-214580 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ stop    │ -p default-k8s-diff-port-927869 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ stop    │ -p newest-cni-722387 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ image   │ embed-certs-214580 image list --format=json                                                                                                                                                                                                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p embed-certs-214580 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:00
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:00.823522  422921 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:00.823684  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823696  422921 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:00.823702  422921 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:00.823906  422921 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:00.824429  422921 out.go:368] Setting JSON to false
	I1101 09:44:00.825935  422921 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5179,"bootTime":1761985062,"procs":518,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:00.826062  422921 start.go:143] virtualization: kvm guest
	I1101 09:44:00.828080  422921 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:00.829516  422921 notify.go:221] Checking for updates...
	I1101 09:44:00.829545  422921 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:00.831103  422921 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:00.832421  422921 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:00.833671  422921 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:00.835236  422921 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:00.836312  422921 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:00.838662  422921 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.838859  422921 config.go:182] Loaded profile config "embed-certs-214580": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839032  422921 config.go:182] Loaded profile config "no-preload-224845": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:00.839168  422921 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:00.868651  422921 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:00.868776  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.932313  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.919582405 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.932410  422921 docker.go:319] overlay module found
	I1101 09:44:00.934186  422921 out.go:179] * Using the docker driver based on user configuration
	I1101 09:44:00.935396  422921 start.go:309] selected driver: docker
	I1101 09:44:00.935426  422921 start.go:930] validating driver "docker" against <nil>
	I1101 09:44:00.935441  422921 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:00.936076  422921 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:00.998903  422921 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:00.988574943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:00.999261  422921 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1101 09:44:00.999309  422921 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1101 09:44:00.999988  422921 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:01.001892  422921 out.go:179] * Using Docker driver with root privileges
	I1101 09:44:01.003008  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:01.003093  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:01.003109  422921 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 09:44:01.003194  422921 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
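
The ExtraOptions entry in the config above ({Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}) is the parsed form of the --extra-config flag from the start command recorded in the Audit table; the flag's general shape is component.key=value:

    out/minikube-linux-amd64 start -p newest-cni-722387 --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16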
	I1101 09:44:01.004455  422921 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:01.005836  422921 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:01.007040  422921 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:01.008185  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.008213  422921 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:01.008239  422921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:01.008255  422921 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:01.008363  422921 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:01.008379  422921 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:01.008553  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:01.008588  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json: {Name:mk9b2e752fcdc3711c80d757637de7b71a85dab7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:01.031509  422921 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:01.031532  422921 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:01.031549  422921 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:01.031586  422921 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:01.031708  422921 start.go:364] duration metric: took 99.393µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:01.031740  422921 start.go:93] Provisioning new machine with config: &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:01.031822  422921 start.go:125] createHost starting for "" (driver="docker")
	W1101 09:44:00.646338  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.647289  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:02.540153  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:04.540648  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:01.033898  422921 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1101 09:44:01.034155  422921 start.go:159] libmachine.API.Create for "newest-cni-722387" (driver="docker")
	I1101 09:44:01.034187  422921 client.go:173] LocalClient.Create starting
	I1101 09:44:01.034307  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem
	I1101 09:44:01.034359  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034377  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034445  422921 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem
	I1101 09:44:01.034476  422921 main.go:143] libmachine: Decoding PEM data...
	I1101 09:44:01.034491  422921 main.go:143] libmachine: Parsing certificate...
	I1101 09:44:01.034944  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1101 09:44:01.054283  422921 cli_runner.go:211] docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1101 09:44:01.054353  422921 network_create.go:284] running [docker network inspect newest-cni-722387] to gather additional debugging logs...
	I1101 09:44:01.054368  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387
	W1101 09:44:01.073549  422921 cli_runner.go:211] docker network inspect newest-cni-722387 returned with exit code 1
	I1101 09:44:01.073579  422921 network_create.go:287] error running [docker network inspect newest-cni-722387]: docker network inspect newest-cni-722387: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-722387 not found
	I1101 09:44:01.073594  422921 network_create.go:289] output of [docker network inspect newest-cni-722387]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-722387 not found
	
	** /stderr **
	I1101 09:44:01.073692  422921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:01.093393  422921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d29bf8504a2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:cd:69:fb:c0:b7} reservation:<nil>}
	I1101 09:44:01.094218  422921 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a4cb229b081d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:ca:6d:0e:f5:7f:54} reservation:<nil>}
	I1101 09:44:01.095202  422921 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-859d00dbc8b9 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:46:da:ec:9f:a9:b4} reservation:<nil>}
	I1101 09:44:01.095784  422921 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-5df57938ba0e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:d2:1b:ab:95:75:01} reservation:<nil>}
	I1101 09:44:01.096312  422921 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-fd9ea47f5997 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:7a:e6:71:2c:14:ef} reservation:<nil>}
	I1101 09:44:01.096837  422921 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-ef396acdcfef IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:66:2e:03:68:3f:bb} reservation:<nil>}
	I1101 09:44:01.097629  422921 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1da70}
	I1101 09:44:01.097655  422921 network_create.go:124] attempt to create docker network newest-cni-722387 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1101 09:44:01.097704  422921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-722387 newest-cni-722387
	I1101 09:44:01.177766  422921 network_create.go:108] docker network newest-cni-722387 192.168.103.0/24 created
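
The scan above probes Docker's bridge networks for a free private /24, skipping 192.168.49.0/24 through 192.168.94.0/24 and settling on 192.168.103.0/24. The taken subnets step the third octet by 9; that increment is inferred from this run, not a documented contract. A minimal Go sketch of such a selection loop, with the taken set stubbed from the interfaces logged above:

```go
package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks 192.168.<octet>.0/24 candidates, stepping the third
// octet, and returns the first one not present in taken. The step of 9
// mirrors the gaps seen in this log; treat it as an assumption.
func firstFreeSubnet(start, step int, taken map[string]bool) (*net.IPNet, error) {
	for octet := start; octet <= 255; octet += step {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if taken[cidr] {
			continue
		}
		_, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, err
		}
		return ipnet, nil
	}
	return nil, fmt.Errorf("no free /24 starting from 192.168.%d.0", start)
}

func main() {
	// Subnets reported as taken by the bridge interfaces in this run.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	subnet, err := firstFreeSubnet(49, 9, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println(subnet) // 192.168.103.0/24, matching the log
}
```

The `reservation:` field in the "using free private subnet" line suggests the real implementation also reserves the chosen subnet to avoid races between profiles created in parallel.
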
	I1101 09:44:01.177827  422921 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-722387" container
	I1101 09:44:01.177901  422921 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1101 09:44:01.199194  422921 cli_runner.go:164] Run: docker volume create newest-cni-722387 --label name.minikube.sigs.k8s.io=newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true
	I1101 09:44:01.221436  422921 oci.go:103] Successfully created a docker volume newest-cni-722387
	I1101 09:44:01.221600  422921 cli_runner.go:164] Run: docker run --rm --name newest-cni-722387-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --entrypoint /usr/bin/test -v newest-cni-722387:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1101 09:44:01.677464  422921 oci.go:107] Successfully prepared a docker volume newest-cni-722387
	I1101 09:44:01.677514  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:01.677544  422921 kic.go:194] Starting extracting preloaded images to volume ...
	I1101 09:44:01.677623  422921 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1101 09:44:05.146749  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:07.148682  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:09.647053  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:07.041096  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:09.539685  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:06.398607  422921 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-722387:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.720922389s)
	I1101 09:44:06.398651  422921 kic.go:203] duration metric: took 4.721100224s to extract preloaded images to volume ...
	W1101 09:44:06.398758  422921 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1101 09:44:06.398800  422921 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1101 09:44:06.398852  422921 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1101 09:44:06.465541  422921 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-722387 --name newest-cni-722387 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-722387 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-722387 --network newest-cni-722387 --ip 192.168.103.2 --volume newest-cni-722387:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1101 09:44:06.805538  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Running}}
	I1101 09:44:06.828089  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:06.851749  422921 cli_runner.go:164] Run: docker exec newest-cni-722387 stat /var/lib/dpkg/alternatives/iptables
	I1101 09:44:06.904120  422921 oci.go:144] the created container "newest-cni-722387" has a running status.
	I1101 09:44:06.904157  422921 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa...
	I1101 09:44:07.001848  422921 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1101 09:44:07.037979  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:07.065271  422921 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1101 09:44:07.065301  422921 kic_runner.go:114] Args: [docker exec --privileged newest-cni-722387 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1101 09:44:07.117749  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:07.141576  422921 machine.go:94] provisionDockerMachine start ...
	I1101 09:44:07.141692  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.170345  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.170754  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.171294  422921 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:44:07.329839  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:07.329873  422921 ubuntu.go:182] provisioning hostname "newest-cni-722387"
	I1101 09:44:07.329971  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.351602  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.351850  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.351866  422921 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-722387 && echo "newest-cni-722387" | sudo tee /etc/hostname
	I1101 09:44:07.513163  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:07.513257  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:07.536121  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:07.536418  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:07.536455  422921 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-722387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-722387/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-722387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:44:07.690179  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
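
The SSH provisioning step above first sets the hostname, then guards /etc/hosts: if no entry ends with the new hostname, it either rewrites an existing 127.0.1.1 line or appends one. A sketch of how that guard script could be templated for an arbitrary hostname (the helper name is hypothetical, not minikube's API):

```go
package main

import "fmt"

// hostsGuardScript renders the /etc/hosts guard shown in the log for any
// hostname. Illustrative only; minikube builds this string internally.
func hostsGuardScript(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() { fmt.Print(hostsGuardScript("newest-cni-722387")) }
```
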
	I1101 09:44:07.690213  422921 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:44:07.690235  422921 ubuntu.go:190] setting up certificates
	I1101 09:44:07.690247  422921 provision.go:84] configureAuth start
	I1101 09:44:07.690303  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:07.710388  422921 provision.go:143] copyHostCerts
	I1101 09:44:07.710461  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:44:07.710477  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:44:07.710559  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:44:07.710683  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:44:07.710693  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:44:07.710734  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:44:07.710817  422921 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:44:07.710827  422921 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:44:07.710863  422921 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:44:07.710954  422921 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.newest-cni-722387 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-722387]
	I1101 09:44:08.842065  422921 provision.go:177] copyRemoteCerts
	I1101 09:44:08.842134  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:44:08.842180  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:08.862777  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:08.967012  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:44:08.987471  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:44:09.005392  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:44:09.024170  422921 provision.go:87] duration metric: took 1.333906879s to configureAuth
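
configureAuth generates a server certificate whose SANs cover 127.0.0.1, 192.168.103.2, localhost, minikube, and the node name, so one cert works for loopback tunnels and in-network clients alike. A standard-library sketch producing a cert with that SAN set; it self-signs to stay short, whereas the real cert is signed by the minikube CA:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-722387"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-722387"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
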
	I1101 09:44:09.024208  422921 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:44:09.024391  422921 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:09.024511  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.046693  422921 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:09.046953  422921 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1101 09:44:09.046976  422921 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:44:09.318902  422921 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:44:09.318969  422921 machine.go:97] duration metric: took 2.177370299s to provisionDockerMachine
	I1101 09:44:09.318981  422921 client.go:176] duration metric: took 8.284787176s to LocalClient.Create
	I1101 09:44:09.319007  422921 start.go:167] duration metric: took 8.284854636s to libmachine.API.Create "newest-cni-722387"
	I1101 09:44:09.319021  422921 start.go:293] postStartSetup for "newest-cni-722387" (driver="docker")
	I1101 09:44:09.319035  422921 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:44:09.319106  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:44:09.319169  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.339325  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.443792  422921 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:44:09.447954  422921 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:44:09.447981  422921 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:44:09.448002  422921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:44:09.448066  422921 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:44:09.448161  422921 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:44:09.448269  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:44:09.457217  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:09.478393  422921 start.go:296] duration metric: took 159.356449ms for postStartSetup
	I1101 09:44:09.478781  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:09.497615  422921 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:09.497880  422921 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:44:09.497971  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.516558  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.616534  422921 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:44:09.621184  422921 start.go:128] duration metric: took 8.589348109s to createHost
	I1101 09:44:09.621206  422921 start.go:83] releasing machines lock for "newest-cni-722387", held for 8.589483705s
	I1101 09:44:09.621261  422921 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:09.639137  422921 ssh_runner.go:195] Run: cat /version.json
	I1101 09:44:09.639152  422921 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:44:09.639193  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.639227  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:09.659576  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.660064  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:09.813835  422921 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:09.820702  422921 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:44:09.859165  422921 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:44:09.863899  422921 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:44:09.863990  422921 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:44:09.891587  422921 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
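
Before CRI-O is configured, the find/mv pass above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, leaving the CNI choice (kindnet, recommended a few lines later) unambiguous. A sketch of the same rename pass; the helper is illustrative and only the directory and suffix come from the log:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames *bridge* and *podman* configs under dir to
// <name>.mk_disabled, mirroring the find/mv pipeline in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return nil, err
		}
		disabled = append(disabled, src)
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	fmt.Println("disabled:", disabled)
}
```
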
	I1101 09:44:09.891611  422921 start.go:496] detecting cgroup driver to use...
	I1101 09:44:09.891642  422921 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:44:09.891685  422921 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:44:09.908170  422921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:44:09.920701  422921 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:44:09.920762  422921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:44:09.939277  422921 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:44:09.958203  422921 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:44:10.041329  422921 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:44:10.138596  422921 docker.go:234] disabling docker service ...
	I1101 09:44:10.138674  422921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:44:10.162388  422921 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:44:10.183310  422921 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:44:10.277717  422921 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:44:10.364259  422921 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:44:10.377455  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:44:10.392986  422921 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:44:10.393061  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.404147  422921 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:44:10.404225  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.414290  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.424717  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.434248  422921 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:44:10.444846  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.466459  422921 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.491214  422921 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:10.504176  422921 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:44:10.514111  422921 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:44:10.522737  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:10.603998  422921 ssh_runner.go:195] Run: sudo systemctl restart crio
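
The sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin pause_image to registry.k8s.io/pause:3.10.1, set cgroup_manager = "systemd", move conmon into the pod cgroup, and add net.ipv4.ip_unprivileged_port_start=0 to default_sysctls before restarting CRI-O. As a sketch, the pause-image rewrite expressed as a single regexp replacement in Go (same pattern as the sed expression; the helper is illustrative):

```go
package main

import (
	"os"
	"regexp"
)

// setPauseImage rewrites any existing pause_image line, matching the sed
// expression in the log: s|^.*pause_image = .*$|pause_image = "<image>"|.
func setPauseImage(path, image string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1")
	if err != nil {
		panic(err)
	}
}
```
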
	I1101 09:44:10.745956  422921 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:44:10.746039  422921 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:44:10.750485  422921 start.go:564] Will wait 60s for crictl version
	I1101 09:44:10.750549  422921 ssh_runner.go:195] Run: which crictl
	I1101 09:44:10.754696  422921 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:44:10.782770  422921 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:44:10.782858  422921 ssh_runner.go:195] Run: crio --version
	I1101 09:44:10.814129  422921 ssh_runner.go:195] Run: crio --version
	I1101 09:44:10.842831  422921 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:44:10.845742  422921 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:10.865977  422921 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:44:10.870737  422921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:10.886253  422921 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:44:10.888153  422921 kubeadm.go:884] updating cluster {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:44:10.888347  422921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:10.888429  422921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:10.923317  422921 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:10.923339  422921 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:44:10.923383  422921 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:10.954698  422921 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:10.954725  422921 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:44:10.954734  422921 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 09:44:10.954838  422921 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-722387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:44:10.954936  422921 ssh_runner.go:195] Run: crio config
	I1101 09:44:11.004449  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:11.004473  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:11.004494  422921 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:44:11.004527  422921 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-722387 NodeName:newest-cni-722387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:44:11.004682  422921 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-722387"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1101 09:44:11.004760  422921 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:44:11.014530  422921 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:44:11.014605  422921 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:44:11.024541  422921 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:44:11.040294  422921 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:44:11.062728  422921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
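
The kubeadm config just written pairs podSubnet 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) with serviceSubnet 10.96.0.0/12; the two ranges must not overlap, or Service VIPs would collide with pod IPs. A quick standard-library check (aligned CIDRs overlap only if one contains the other's base address):

```go
package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR-aligned networks share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.42.0.0/16")     // podSubnet from the config above
	_, services, _ := net.ParseCIDR("10.96.0.0/12") // serviceSubnet
	// false: 10.96.0.0/12 begins well past 10.42.255.255
	fmt.Println("overlap:", overlaps(pods, services))
}
```
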
	I1101 09:44:11.077519  422921 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:44:11.081688  422921 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:11.094974  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:11.196048  422921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:11.218864  422921 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387 for IP: 192.168.103.2
	I1101 09:44:11.218886  422921 certs.go:195] generating shared ca certs ...
	I1101 09:44:11.218905  422921 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.219079  422921 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:44:11.219129  422921 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:44:11.219137  422921 certs.go:257] generating profile certs ...
	I1101 09:44:11.219206  422921 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key
	I1101 09:44:11.219226  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt with IP's: []
	I1101 09:44:11.461428  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt ...
	I1101 09:44:11.461455  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.crt: {Name:mka26fe91724530410954f0cb0f760186d382fac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.461645  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key ...
	I1101 09:44:11.461660  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key: {Name:mkfd8769aff14fe4cbc98be403d7018408109ab6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.462191  422921 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae
	I1101 09:44:11.462211  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1101 09:44:11.868383  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae ...
	I1101 09:44:11.868418  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae: {Name:mk4f413fba17a26ebf9c87bc9593ce90dfb89ffe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.868634  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae ...
	I1101 09:44:11.868654  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae: {Name:mk80e9deceb79e9196c5e16230d90849359b0914 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:11.868766  422921 certs.go:382] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt.9a1cecae -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt
	I1101 09:44:11.868896  422921 certs.go:386] copying /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae -> /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key
	I1101 09:44:11.869007  422921 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key
	I1101 09:44:11.869030  422921 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt with IP's: []
	I1101 09:44:12.000122  422921 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt ...
	I1101 09:44:12.000158  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt: {Name:mk2ad63222a5177d2492cb7d1ba84a51f7e11b2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:12.000356  422921 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key ...
	I1101 09:44:12.000380  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key: {Name:mk162ecceb80bafb66ce5e25b61bc5c04bab15ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:12.000606  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:44:12.000657  422921 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:44:12.000669  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:44:12.000700  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:44:12.000738  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:44:12.000771  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:44:12.000830  422921 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:12.001938  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:44:12.022650  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:44:12.042868  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:44:12.061963  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:44:12.080887  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:44:12.100615  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:44:12.120330  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:44:12.141267  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:44:12.163343  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:44:12.186827  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:44:12.208773  422921 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:44:12.230567  422921 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:44:12.246541  422921 ssh_runner.go:195] Run: openssl version
	I1101 09:44:12.253877  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:44:12.264673  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.270201  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.270261  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:12.306242  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:44:12.315679  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:44:12.325013  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.329129  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.329188  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:44:12.371362  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:44:12.380856  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:44:12.390756  422921 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.394977  422921 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.395045  422921 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:44:12.432336  422921 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
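
Each installed CA above is also linked as /etc/ssl/certs/<subject-hash>.0 (b5213941.0, 51391683.0, 3ec20f2e.0), the layout OpenSSL's verifier uses for lookup. A sketch reproducing the test -L || ln -fs step by shelling out to openssl with the same flags the log uses (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates certsDir/<hash>.0 -> cert, mirroring the
// openssl x509 -hash / ln -fs pair in the log.
func linkBySubjectHash(cert, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return "", err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	if _, err := os.Lstat(link); err == nil {
		return link, nil // already linked, like the test -L guard in the log
	}
	return link, os.Symlink(cert, link)
}

func main() {
	link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		panic(err)
	}
	fmt.Println(link)
}
```
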
	I1101 09:44:12.442783  422921 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:44:12.446834  422921 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1101 09:44:12.446922  422921 kubeadm.go:401] StartCluster: {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:12.447025  422921 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:44:12.447084  422921 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:44:12.476859  422921 cri.go:89] found id: ""
	I1101 09:44:12.476960  422921 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:44:12.485947  422921 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1101 09:44:12.495118  422921 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1101 09:44:12.495191  422921 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1101 09:44:12.503837  422921 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1101 09:44:12.503858  422921 kubeadm.go:158] found existing configuration files:
	
	I1101 09:44:12.503904  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1101 09:44:12.513028  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1101 09:44:12.513123  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1101 09:44:12.521312  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1101 09:44:12.529973  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1101 09:44:12.530061  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1101 09:44:12.539012  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1101 09:44:12.547839  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1101 09:44:12.547902  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1101 09:44:12.555437  422921 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1101 09:44:12.563284  422921 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1101 09:44:12.563337  422921 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1101 09:44:12.571308  422921 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1101 09:44:12.610706  422921 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1101 09:44:12.610775  422921 kubeadm.go:319] [preflight] Running pre-flight checks
	I1101 09:44:12.633149  422921 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1101 09:44:12.633212  422921 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1101 09:44:12.633239  422921 kubeadm.go:319] OS: Linux
	I1101 09:44:12.633299  422921 kubeadm.go:319] CGROUPS_CPU: enabled
	I1101 09:44:12.633366  422921 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1101 09:44:12.633462  422921 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1101 09:44:12.633544  422921 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1101 09:44:12.633643  422921 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1101 09:44:12.633730  422921 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1101 09:44:12.633795  422921 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1101 09:44:12.633869  422921 kubeadm.go:319] CGROUPS_IO: enabled
	I1101 09:44:12.697497  422921 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1101 09:44:12.697699  422921 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1101 09:44:12.697851  422921 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1101 09:44:12.706690  422921 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1101 09:44:11.647095  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:14.146846  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	W1101 09:44:12.040114  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:14.539447  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:12.709509  422921 out.go:252]   - Generating certificates and keys ...
	I1101 09:44:12.709606  422921 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1101 09:44:12.709733  422921 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1101 09:44:12.854194  422921 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1101 09:44:13.380816  422921 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1101 09:44:13.429834  422921 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1101 09:44:13.579950  422921 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1101 09:44:13.897795  422921 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1101 09:44:13.898003  422921 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-722387] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:44:14.063204  422921 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1101 09:44:14.063358  422921 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-722387] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1101 09:44:14.595857  422921 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1101 09:44:14.864904  422921 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1101 09:44:14.974817  422921 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1101 09:44:14.974965  422921 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1101 09:44:15.411035  422921 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1101 09:44:15.738220  422921 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1101 09:44:16.163033  422921 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1101 09:44:16.379713  422921 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1101 09:44:16.656283  422921 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1101 09:44:16.656703  422921 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1101 09:44:16.660761  422921 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1101 09:44:16.646757  415212 pod_ready.go:104] pod "coredns-66bc5c9577-cmnj8" is not "Ready", error: <nil>
	I1101 09:44:18.647241  415212 pod_ready.go:94] pod "coredns-66bc5c9577-cmnj8" is "Ready"
	I1101 09:44:18.647288  415212 pod_ready.go:86] duration metric: took 32.006837487s for pod "coredns-66bc5c9577-cmnj8" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.650658  415212 pod_ready.go:83] waiting for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.655619  415212 pod_ready.go:94] pod "etcd-embed-certs-214580" is "Ready"
	I1101 09:44:18.655650  415212 pod_ready.go:86] duration metric: took 4.963735ms for pod "etcd-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.658523  415212 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.664301  415212 pod_ready.go:94] pod "kube-apiserver-embed-certs-214580" is "Ready"
	I1101 09:44:18.664329  415212 pod_ready.go:86] duration metric: took 5.774053ms for pod "kube-apiserver-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.666532  415212 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:18.845825  415212 pod_ready.go:94] pod "kube-controller-manager-embed-certs-214580" is "Ready"
	I1101 09:44:18.845858  415212 pod_ready.go:86] duration metric: took 179.302458ms for pod "kube-controller-manager-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.045094  415212 pod_ready.go:83] waiting for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.445427  415212 pod_ready.go:94] pod "kube-proxy-49j45" is "Ready"
	I1101 09:44:19.445528  415212 pod_ready.go:86] duration metric: took 400.403346ms for pod "kube-proxy-49j45" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:19.645441  415212 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:20.044873  415212 pod_ready.go:94] pod "kube-scheduler-embed-certs-214580" is "Ready"
	I1101 09:44:20.044906  415212 pod_ready.go:86] duration metric: took 399.441ms for pod "kube-scheduler-embed-certs-214580" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:20.044958  415212 pod_ready.go:40] duration metric: took 33.416946486s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:44:20.092487  415212 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:20.094263  415212 out.go:179] * Done! kubectl is now configured to use "embed-certs-214580" cluster and "default" namespace by default
	W1101 09:44:17.039371  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:19.039784  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:16.662416  422921 out.go:252]   - Booting up control plane ...
	I1101 09:44:16.662552  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1101 09:44:16.662673  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1101 09:44:16.663362  422921 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1101 09:44:16.678425  422921 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1101 09:44:16.678561  422921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1101 09:44:16.687747  422921 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1101 09:44:16.688059  422921 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1101 09:44:16.688132  422921 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1101 09:44:16.797757  422921 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1101 09:44:16.797944  422921 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1101 09:44:17.299548  422921 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.906008ms
	I1101 09:44:17.303198  422921 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1101 09:44:17.303364  422921 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1101 09:44:17.303521  422921 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1101 09:44:17.303650  422921 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1101 09:44:18.761873  422921 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.458638771s
	I1101 09:44:19.840176  422921 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.53698671s
	I1101 09:44:21.304575  422921 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001294509s
	I1101 09:44:21.315500  422921 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1101 09:44:21.326477  422921 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1101 09:44:21.336740  422921 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1101 09:44:21.337047  422921 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-722387 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1101 09:44:21.345651  422921 kubeadm.go:319] [bootstrap-token] Using token: hcqanb.hb6jvis691nmk76a
	I1101 09:44:21.347127  422921 out.go:252]   - Configuring RBAC rules ...
	I1101 09:44:21.347291  422921 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1101 09:44:21.350981  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1101 09:44:21.357244  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1101 09:44:21.360342  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1101 09:44:21.364309  422921 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1101 09:44:21.367361  422921 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1101 09:44:21.712031  422921 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1101 09:44:22.130627  422921 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1101 09:44:22.711154  422921 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1101 09:44:22.712086  422921 kubeadm.go:319] 
	I1101 09:44:22.712150  422921 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1101 09:44:22.712177  422921 kubeadm.go:319] 
	I1101 09:44:22.712290  422921 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1101 09:44:22.712300  422921 kubeadm.go:319] 
	I1101 09:44:22.712336  422921 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1101 09:44:22.712412  422921 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1101 09:44:22.712476  422921 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1101 09:44:22.712501  422921 kubeadm.go:319] 
	I1101 09:44:22.712589  422921 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1101 09:44:22.712599  422921 kubeadm.go:319] 
	I1101 09:44:22.712661  422921 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1101 09:44:22.712670  422921 kubeadm.go:319] 
	I1101 09:44:22.712714  422921 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1101 09:44:22.712814  422921 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1101 09:44:22.712954  422921 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1101 09:44:22.712966  422921 kubeadm.go:319] 
	I1101 09:44:22.713076  422921 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1101 09:44:22.713147  422921 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1101 09:44:22.713153  422921 kubeadm.go:319] 
	I1101 09:44:22.713233  422921 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hcqanb.hb6jvis691nmk76a \
	I1101 09:44:22.713362  422921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 \
	I1101 09:44:22.713400  422921 kubeadm.go:319] 	--control-plane 
	I1101 09:44:22.713409  422921 kubeadm.go:319] 
	I1101 09:44:22.713509  422921 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1101 09:44:22.713529  422921 kubeadm.go:319] 
	I1101 09:44:22.713633  422921 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hcqanb.hb6jvis691nmk76a \
	I1101 09:44:22.713766  422921 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:72d862efd6f702d2cd9b2903f9c615887f85516be0adee91c928b93e1ed5dae8 
	I1101 09:44:22.716577  422921 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1101 09:44:22.716763  422921 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
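	[editor's note] The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. Per the upstream kubeadm documentation it can be recomputed on the control plane; since this run sets certificateDir to /var/lib/minikube/certs (see the [certs] line above), the CA path would be:
	
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	        | openssl rsa -pubin -outform der 2>/dev/null \
	        | openssl dgst -sha256 -hex | sed 's/^.* //'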
	I1101 09:44:22.716802  422921 cni.go:84] Creating CNI manager for ""
	I1101 09:44:22.716816  422921 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:22.719006  422921 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1101 09:44:21.539320  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	W1101 09:44:23.539726  415823 pod_ready.go:104] pod "coredns-66bc5c9577-mlk9t" is not "Ready", error: <nil>
	I1101 09:44:24.539003  415823 pod_ready.go:94] pod "coredns-66bc5c9577-mlk9t" is "Ready"
	I1101 09:44:24.539036  415823 pod_ready.go:86] duration metric: took 37.005664079s for pod "coredns-66bc5c9577-mlk9t" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.542034  415823 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.546171  415823 pod_ready.go:94] pod "etcd-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.546202  415823 pod_ready.go:86] duration metric: took 4.14183ms for pod "etcd-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.548297  415823 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.552057  415823 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.552085  415823 pod_ready.go:86] duration metric: took 3.765443ms for pod "kube-apiserver-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.553877  415823 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.737469  415823 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:24.737500  415823 pod_ready.go:86] duration metric: took 183.602214ms for pod "kube-controller-manager-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:24.936875  415823 pod_ready.go:83] waiting for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.337704  415823 pod_ready.go:94] pod "kube-proxy-dszvg" is "Ready"
	I1101 09:44:25.337740  415823 pod_ready.go:86] duration metric: took 400.799752ms for pod "kube-proxy-dszvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.537478  415823 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:22.720310  422921 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1101 09:44:22.724894  422921 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1101 09:44:22.724941  422921 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1101 09:44:22.738693  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
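	[editor's note] Once the CNI manifest is applied, kindnet runs as a DaemonSet in kube-system (the name kindnet is inferred from the pod names seen elsewhere in this log, e.g. kindnet-vq8r5). An illustrative check with plain kubectl, not part of the test run:
	
	    kubectl -n kube-system rollout status ds/kindnet
	    kubectl -n kube-system get pods -o wide | grep kindnet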
	I1101 09:44:22.952858  422921 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1101 09:44:22.952950  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:22.952991  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-722387 minikube.k8s.io/updated_at=2025_11_01T09_44_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7 minikube.k8s.io/name=newest-cni-722387 minikube.k8s.io/primary=true
	I1101 09:44:23.035461  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:23.035594  422921 ops.go:34] apiserver oom_adj: -16
	I1101 09:44:23.536240  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:24.035722  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:24.536107  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.035835  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.535832  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:25.937251  415823 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-927869" is "Ready"
	I1101 09:44:25.937281  415823 pod_ready.go:86] duration metric: took 399.779095ms for pod "kube-scheduler-default-k8s-diff-port-927869" in "kube-system" namespace to be "Ready" or be gone ...
	I1101 09:44:25.937293  415823 pod_ready.go:40] duration metric: took 38.410135058s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1101 09:44:25.982726  415823 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:25.985490  415823 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-927869" cluster and "default" namespace by default
	I1101 09:44:26.036048  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:26.536576  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:27.035529  422921 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1101 09:44:27.106742  422921 kubeadm.go:1114] duration metric: took 4.153880162s to wait for elevateKubeSystemPrivileges
	I1101 09:44:27.106782  422921 kubeadm.go:403] duration metric: took 14.659875744s to StartCluster
	I1101 09:44:27.106806  422921 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:27.106895  422921 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:27.108666  422921 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:27.108895  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1101 09:44:27.108939  422921 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:44:27.109008  422921 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-722387"
	I1101 09:44:27.109025  422921 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-722387"
	I1101 09:44:27.108892  422921 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:27.109049  422921 addons.go:70] Setting default-storageclass=true in profile "newest-cni-722387"
	I1101 09:44:27.109061  422921 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:27.109082  422921 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-722387"
	I1101 09:44:27.109130  422921 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:27.109483  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.109621  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.111555  422921 out.go:179] * Verifying Kubernetes components...
	I1101 09:44:27.113019  422921 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:27.134154  422921 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:44:27.135067  422921 addons.go:239] Setting addon default-storageclass=true in "newest-cni-722387"
	I1101 09:44:27.135105  422921 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:27.135433  422921 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:27.135476  422921 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:27.135497  422921 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:44:27.135550  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:27.165881  422921 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:27.165927  422921 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:44:27.165992  422921 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:27.167165  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:27.193516  422921 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:27.208174  422921 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
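	[editor's note] The sed pipeline above edits the CoreDNS Corefile before replacing the ConfigMap: it inserts a log directive ahead of errors and a hosts block ahead of forward, mapping host.minikube.internal to the host gateway IP. Reconstructed from the sed expressions (not copied from the cluster), the resulting fragment looks roughly like:
	
	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }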
	I1101 09:44:27.256046  422921 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:27.286530  422921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:27.309898  422921 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:27.399415  422921 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1101 09:44:27.401255  422921 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:44:27.401316  422921 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:44:27.609302  422921 api_server.go:72] duration metric: took 500.236103ms to wait for apiserver process to appear ...
	I1101 09:44:27.609331  422921 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:44:27.609359  422921 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:27.614891  422921 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:44:27.615772  422921 api_server.go:141] control plane version: v1.34.1
	I1101 09:44:27.615796  422921 api_server.go:131] duration metric: took 6.458373ms to wait for apiserver health ...
	I1101 09:44:27.615804  422921 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:44:27.616498  422921 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1101 09:44:27.618332  422921 system_pods.go:59] 7 kube-system pods found
	I1101 09:44:27.618371  422921 system_pods.go:61] "etcd-newest-cni-722387" [db6d9615-3fd5-4642-abb7-9c060c90d98e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:44:27.618366  422921 addons.go:515] duration metric: took 509.425146ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1101 09:44:27.618380  422921 system_pods.go:61] "kindnet-vq8r5" [0e3ba1a9-d43e-4944-bd85-a7858465eeb5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:44:27.618390  422921 system_pods.go:61] "kube-apiserver-newest-cni-722387" [8e6d728a-c7de-4b60-8627-f4e2729f14b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:44:27.618398  422921 system_pods.go:61] "kube-controller-manager-newest-cni-722387" [a0094ce2-c3fe-4f6f-9f2b-7d9871577296] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:44:27.618404  422921 system_pods.go:61] "kube-proxy-rxnwv" [b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:44:27.618412  422921 system_pods.go:61] "kube-scheduler-newest-cni-722387" [8c1c8755-a1ca-4aa2-894c-b7ae1e5f1ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:44:27.618418  422921 system_pods.go:61] "storage-provisioner" [cca90c7a-0f05-4855-ba4d-530a67715840] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:27.618426  422921 system_pods.go:74] duration metric: took 2.615581ms to wait for pod list to return data ...
	I1101 09:44:27.618435  422921 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:44:27.620461  422921 default_sa.go:45] found service account: "default"
	I1101 09:44:27.620483  422921 default_sa.go:55] duration metric: took 2.03963ms for default service account to be created ...
	I1101 09:44:27.620500  422921 kubeadm.go:587] duration metric: took 511.436014ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:27.620522  422921 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:44:27.624060  422921 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:44:27.624099  422921 node_conditions.go:123] node cpu capacity is 8
	I1101 09:44:27.624117  422921 node_conditions.go:105] duration metric: took 3.590038ms to run NodePressure ...
	I1101 09:44:27.624134  422921 start.go:242] waiting for startup goroutines ...
	I1101 09:44:27.905064  422921 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-722387" context rescaled to 1 replicas
	I1101 09:44:27.905103  422921 start.go:247] waiting for cluster config update ...
	I1101 09:44:27.905115  422921 start.go:256] writing updated cluster config ...
	I1101 09:44:27.905522  422921 ssh_runner.go:195] Run: rm -f paused
	I1101 09:44:27.956603  422921 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:27.958676  422921 out.go:179] * Done! kubectl is now configured to use "newest-cni-722387" cluster and "default" namespace by default
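	[editor's note] With the profile reporting Done, a plain-kubectl smoke test against the context named in the line above would be (illustrative, not part of the test run):
	
	    kubectl --context newest-cni-722387 get nodes
	    kubectl --context newest-cni-722387 -n kube-system get pods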
	
	
	==> CRI-O <==
	Nov 01 09:43:57 embed-certs-214580 crio[561]: time="2025-11-01T09:43:57.665027654Z" level=info msg="Created container a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=b1c91bf5-8f01-4ab2-ab81-cc7d3c3c1156 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:57 embed-certs-214580 crio[561]: time="2025-11-01T09:43:57.665896795Z" level=info msg="Starting container: a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc" id=6eed1cfb-8010-4e7a-8140-c4feaaebf87a name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:57 embed-certs-214580 crio[561]: time="2025-11-01T09:43:57.668037389Z" level=info msg="Started container" PID=1732 containerID=a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper id=6eed1cfb-8010-4e7a-8140-c4feaaebf87a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a36b64f22c4e929c9972fdb657313aeae65ba1939b14851263c22f5754be603
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.291827735Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=761432ea-fde2-4146-8368-8fd288480602 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.302750099Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4ecfb0b8-3284-4943-8069-2f1c4cc1491c name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.305817519Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=ede2478a-a151-4918-ba49-6bf0dafffb0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.305960146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.371796735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.372542375Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.615333933Z" level=info msg="Created container 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=ede2478a-a151-4918-ba49-6bf0dafffb0b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.616664658Z" level=info msg="Starting container: 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b" id=6ad10c82-7ba3-4877-8e47-a0bcc7a964a8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:58 embed-certs-214580 crio[561]: time="2025-11-01T09:43:58.619266706Z" level=info msg="Started container" PID=1741 containerID=8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper id=6ad10c82-7ba3-4877-8e47-a0bcc7a964a8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a36b64f22c4e929c9972fdb657313aeae65ba1939b14851263c22f5754be603
	Nov 01 09:43:59 embed-certs-214580 crio[561]: time="2025-11-01T09:43:59.298349359Z" level=info msg="Removing container: a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc" id=7851930b-d862-46ed-a6f6-c714dce36133 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:43:59 embed-certs-214580 crio[561]: time="2025-11-01T09:43:59.52498389Z" level=info msg="Removed container a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=7851930b-d862-46ed-a6f6-c714dce36133 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.206624356Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=23cc060f-c284-4d8c-9809-74b7c7162b72 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.207480809Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7fb24d1c-ce77-474f-b2d9-5b4f5f2bd50e name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.208583803Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=76efb056-84a4-474a-b8e9-ac52a0bf2a94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.208783474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.215392811Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.21619355Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.243688716Z" level=info msg="Created container d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=76efb056-84a4-474a-b8e9-ac52a0bf2a94 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.244567927Z" level=info msg="Starting container: d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e" id=7921f00e-371f-4e37-8920-631c91a65ada name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.246788884Z" level=info msg="Started container" PID=1757 containerID=d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper id=7921f00e-371f-4e37-8920-631c91a65ada name=/runtime.v1.RuntimeService/StartContainer sandboxID=3a36b64f22c4e929c9972fdb657313aeae65ba1939b14851263c22f5754be603
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.337284369Z" level=info msg="Removing container: 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b" id=46b1e771-bfb8-4d1a-a602-f151d028f12c name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:44:12 embed-certs-214580 crio[561]: time="2025-11-01T09:44:12.348459966Z" level=info msg="Removed container 8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5/dashboard-metrics-scraper" id=46b1e771-bfb8-4d1a-a602-f151d028f12c name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	d03cec41bb10f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   3a36b64f22c4e       dashboard-metrics-scraper-6ffb444bf9-2vxx5   kubernetes-dashboard
	2c7e75150e825       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   1bfb7f94d5940       kubernetes-dashboard-855c9754f9-pcx7c        kubernetes-dashboard
	993f4e8211641       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Running             storage-provisioner         1                   851b671a1407e       storage-provisioner                          kube-system
	a00a0012e7f0b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   a3d2d38ff20d8       busybox                                      default
	94bfc341f8803       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   f43044105fbe4       coredns-66bc5c9577-cmnj8                     kube-system
	604c59ebde7d4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   851b671a1407e       storage-provisioner                          kube-system
	4e54db4ff1647       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   579c33216b458       kindnet-v28lz                                kube-system
	4afe29f878054       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           49 seconds ago      Running             kube-proxy                  0                   736f4ca58df0a       kube-proxy-49j45                             kube-system
	92f3e97dd2f0d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   d002132176b94       etcd-embed-certs-214580                      kube-system
	900d5eaf90986       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   14039442d974c       kube-controller-manager-embed-certs-214580   kube-system
	e96acc480b4e7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   0d61740d15c5c       kube-apiserver-embed-certs-214580            kube-system
	44596abc18510       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   4ce64542c59d1       kube-scheduler-embed-certs-214580            kube-system
	
	
	==> coredns [94bfc341f880370946fcac7fd5ce45c7861054b53499632f386ed99e3432d6c2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33398 - 39048 "HINFO IN 4650601798089357910.1429863867643008636. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.057085215s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
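	[editor's note] The "dial tcp 10.96.0.1:443: i/o timeout" errors mean this CoreDNS instance could not reach the apiserver through the kubernetes Service VIP, which is typical while kube-proxy/CNI programming is not yet in place after a node restart. Illustrative host-side checks with plain kubectl:
	
	    kubectl -n default get svc kubernetes            # confirm the 10.96.0.1 ClusterIP
	    kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide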
	
	
	==> describe nodes <==
	Name:               embed-certs-214580
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-214580
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=embed-certs-214580
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-214580
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:43:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-214580
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d2ac0cbf-eedb-40ea-a447-534bb7a6586c
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-cmnj8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-214580                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-v28lz                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-214580             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-214580    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-49j45                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-214580             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2vxx5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-pcx7c         0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-214580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-214580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-214580 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node embed-certs-214580 event: Registered Node embed-certs-214580 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-214580 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-214580 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-214580 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-214580 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-214580 event: Registered Node embed-certs-214580 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [92f3e97dd2f0dfb87caf1169f059e045ee0bba63017d45c00279b75a85b35dd1] <==
	{"level":"warn","ts":"2025-11-01T09:43:44.347584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.364239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.374741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.390483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.420577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.433065Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.456404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.463238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:44.529996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47776","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:43:58.560791Z","caller":"traceutil/trace.go:172","msg":"trace[1216427449] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"186.558316ms","start":"2025-11-01T09:43:58.374215Z","end":"2025-11-01T09:43:58.560773Z","steps":["trace[1216427449] 'process raft request'  (duration: 186.39563ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:43:58.615370Z","caller":"traceutil/trace.go:172","msg":"trace[174755659] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"241.132389ms","start":"2025-11-01T09:43:58.374215Z","end":"2025-11-01T09:43:58.615348Z","steps":["trace[174755659] 'process raft request'  (duration: 240.997752ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:43:58.806311Z","caller":"traceutil/trace.go:172","msg":"trace[2134451426] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"120.537083ms","start":"2025-11-01T09:43:58.685749Z","end":"2025-11-01T09:43:58.806286Z","steps":["trace[2134451426] 'process raft request'  (duration: 92.467793ms)","trace[2134451426] 'compare'  (duration: 27.961569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:43:59.441171Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"142.553458ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5\" limit:1 ","response":"range_response_count:1 size:4612"}
	{"level":"info","ts":"2025-11-01T09:43:59.441353Z","caller":"traceutil/trace.go:172","msg":"trace[1126848669] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5; range_end:; response_count:1; response_revision:598; }","duration":"142.748265ms","start":"2025-11-01T09:43:59.298585Z","end":"2025-11-01T09:43:59.441333Z","steps":["trace[1126848669] 'agreement among raft nodes before linearized reading'  (duration: 43.400863ms)","trace[1126848669] 'range keys from in-memory index tree'  (duration: 99.048302ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:43:59.444577Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.462961ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765875784007944 > lease_revoke:<id:5b339a3ecc2e7658>","response":"size:29"}
	{"level":"info","ts":"2025-11-01T09:43:59.444757Z","caller":"traceutil/trace.go:172","msg":"trace[1155981835] transaction","detail":"{read_only:false; response_revision:599; number_of_response:1; }","duration":"145.480109ms","start":"2025-11-01T09:43:59.299268Z","end":"2025-11-01T09:43:59.444748Z","steps":["trace[1155981835] 'process raft request'  (duration: 145.390809ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:44:05.050114Z","caller":"traceutil/trace.go:172","msg":"trace[1699605369] linearizableReadLoop","detail":"{readStateIndex:635; appliedIndex:635; }","duration":"123.18778ms","start":"2025-11-01T09:44:04.926902Z","end":"2025-11-01T09:44:05.050090Z","steps":["trace[1699605369] 'read index received'  (duration: 123.178363ms)","trace[1699605369] 'applied index is now lower than readState.Index'  (duration: 8.181µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-01T09:44:05.059163Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.234922ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" limit:1 ","response":"range_response_count:1 size:1118"}
	{"level":"info","ts":"2025-11-01T09:44:05.059328Z","caller":"traceutil/trace.go:172","msg":"trace[1451019433] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:606; }","duration":"132.415063ms","start":"2025-11-01T09:44:04.926896Z","end":"2025-11-01T09:44:05.059311Z","steps":["trace[1451019433] 'agreement among raft nodes before linearized reading'  (duration: 123.311249ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:44:05.059331Z","caller":"traceutil/trace.go:172","msg":"trace[655869371] transaction","detail":"{read_only:false; response_revision:607; number_of_response:1; }","duration":"135.794897ms","start":"2025-11-01T09:44:04.923518Z","end":"2025-11-01T09:44:05.059313Z","steps":["trace[655869371] 'process raft request'  (duration: 126.609259ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:44:05.059355Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"124.499652ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:44:05.059399Z","caller":"traceutil/trace.go:172","msg":"trace[870144663] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:607; }","duration":"124.554626ms","start":"2025-11-01T09:44:04.934835Z","end":"2025-11-01T09:44:05.059390Z","steps":["trace[870144663] 'agreement among raft nodes before linearized reading'  (duration: 124.476565ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-01T09:44:05.334942Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.413369ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-01T09:44:05.335020Z","caller":"traceutil/trace.go:172","msg":"trace[63103734] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:607; }","duration":"106.533273ms","start":"2025-11-01T09:44:05.228472Z","end":"2025-11-01T09:44:05.335005Z","steps":["trace[63103734] 'range keys from in-memory index tree'  (duration: 106.345215ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-01T09:44:06.030237Z","caller":"traceutil/trace.go:172","msg":"trace[538418333] transaction","detail":"{read_only:false; response_revision:609; number_of_response:1; }","duration":"125.999449ms","start":"2025-11-01T09:44:05.904217Z","end":"2025-11-01T09:44:06.030217Z","steps":["trace[538418333] 'process raft request'  (duration: 125.851371ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:44:36 up  1:26,  0 user,  load average: 8.79, 5.98, 3.53
	Linux embed-certs-214580 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e54db4ff164762c55475c64b60cd58e8006a9d8724b2134ba5420988328409a] <==
	I1101 09:43:46.886286       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:46.886585       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1101 09:43:46.886768       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:46.886839       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:46.886897       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:47.182829       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:47.182854       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:47.182865       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:47.183057       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:47.683822       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:47.683859       1 metrics.go:72] Registering metrics
	I1101 09:43:47.683977       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:57.183098       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:43:57.183191       1 main.go:301] handling current node
	I1101 09:44:07.188047       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:44:07.188077       1 main.go:301] handling current node
	I1101 09:44:17.183053       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:44:17.183085       1 main.go:301] handling current node
	I1101 09:44:27.183158       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1101 09:44:27.183198       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e96acc480b4e765646d24acecdd6b0e6543ce1a4ca7a4dfebb2ac4820f369fdc] <==
	I1101 09:43:45.298718       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1101 09:43:45.298754       1 policy_source.go:240] refreshing policies
	I1101 09:43:45.303864       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1101 09:43:45.307587       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:43:45.313987       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:43:45.314051       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:43:45.314073       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:43:45.314082       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:45.314090       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:43:45.342818       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:43:45.343274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:45.343467       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1101 09:43:45.344334       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1101 09:43:45.344425       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1101 09:43:45.773710       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:43:45.811103       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:43:45.836987       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:45.848587       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:45.857552       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:43:45.918185       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.164.56"}
	I1101 09:43:45.932332       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.179.192"}
	I1101 09:43:46.149889       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:48.674190       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:48.726942       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:43:49.023301       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [900d5eaf90986af4e504a563b9e25cc937211d9280a58157d415269656f12fe8] <==
	I1101 09:43:48.595795       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:43:48.605155       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1101 09:43:48.612438       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:43:48.621324       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:43:48.621360       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:43:48.621379       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:43:48.622327       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1101 09:43:48.622436       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:43:48.622570       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:43:48.622698       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-214580"
	I1101 09:43:48.622752       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:43:48.625610       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1101 09:43:48.626789       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1101 09:43:48.626988       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:48.628212       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1101 09:43:48.628241       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:43:48.630833       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:43:48.631018       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1101 09:43:48.631032       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1101 09:43:48.631049       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1101 09:43:48.631514       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:43:48.634116       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:43:48.634575       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1101 09:43:48.641994       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:43:48.648095       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4afe29f878054c6f745c8446b62728a0f47041b20a9aebe50516a89df2ce3ad4] <==
	I1101 09:43:46.699810       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:43:46.778303       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:43:46.878859       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:43:46.878930       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1101 09:43:46.879049       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:43:46.904672       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:46.904737       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:43:46.911497       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:43:46.912143       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:43:46.912354       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:46.915554       1 config.go:200] "Starting service config controller"
	I1101 09:43:46.915577       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:43:46.915786       1 config.go:309] "Starting node config controller"
	I1101 09:43:46.915809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:43:46.915818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:43:46.916219       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:43:46.916240       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:43:46.916259       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:43:46.916263       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:43:47.016021       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:43:47.017219       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:43:47.017245       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [44596abc1851041c6cd33df427646452721a1d34c3147c32241a3f38e3af7c91] <==
	I1101 09:43:43.542625       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:43:45.167982       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:43:45.168190       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:43:45.168211       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:43:45.168222       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:43:45.310042       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:43:45.310078       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:45.317219       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:45.317318       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:45.322139       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:43:45.321505       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:43:45.417749       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:43:46 embed-certs-214580 kubelet[713]: I1101 09:43:46.309864     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/234d7bd6-5336-4ec0-8d37-9e59105a6166-lib-modules\") pod \"kube-proxy-49j45\" (UID: \"234d7bd6-5336-4ec0-8d37-9e59105a6166\") " pod="kube-system/kube-proxy-49j45"
	Nov 01 09:43:47 embed-certs-214580 kubelet[713]: I1101 09:43:47.249358     713 scope.go:117] "RemoveContainer" containerID="604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233484     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8827\" (UniqueName: \"kubernetes.io/projected/f08d1bad-7e3f-401e-b29b-25804d0f1324-kube-api-access-b8827\") pod \"dashboard-metrics-scraper-6ffb444bf9-2vxx5\" (UID: \"f08d1bad-7e3f-401e-b29b-25804d0f1324\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233572     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a1ea5a6d-90cf-47e8-b721-ea8375535952-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-pcx7c\" (UID: \"a1ea5a6d-90cf-47e8-b721-ea8375535952\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcx7c"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233608     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f08d1bad-7e3f-401e-b29b-25804d0f1324-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2vxx5\" (UID: \"f08d1bad-7e3f-401e-b29b-25804d0f1324\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5"
	Nov 01 09:43:49 embed-certs-214580 kubelet[713]: I1101 09:43:49.233705     713 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttnv\" (UniqueName: \"kubernetes.io/projected/a1ea5a6d-90cf-47e8-b721-ea8375535952-kube-api-access-sttnv\") pod \"kubernetes-dashboard-855c9754f9-pcx7c\" (UID: \"a1ea5a6d-90cf-47e8-b721-ea8375535952\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcx7c"
	Nov 01 09:43:55 embed-certs-214580 kubelet[713]: I1101 09:43:55.303515     713 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-pcx7c" podStartSLOduration=1.457659727 podStartE2EDuration="6.303487393s" podCreationTimestamp="2025-11-01 09:43:49 +0000 UTC" firstStartedPulling="2025-11-01 09:43:49.429468005 +0000 UTC m=+7.346463722" lastFinishedPulling="2025-11-01 09:43:54.275295666 +0000 UTC m=+12.192291388" observedRunningTime="2025-11-01 09:43:55.303220789 +0000 UTC m=+13.220216530" watchObservedRunningTime="2025-11-01 09:43:55.303487393 +0000 UTC m=+13.220483132"
	Nov 01 09:43:58 embed-certs-214580 kubelet[713]: I1101 09:43:58.291380     713 scope.go:117] "RemoveContainer" containerID="a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc"
	Nov 01 09:43:59 embed-certs-214580 kubelet[713]: I1101 09:43:59.296785     713 scope.go:117] "RemoveContainer" containerID="a969ab7c72dcf6b7953bee897460120ad5c9e903415180d08edeb3cf7d41a1bc"
	Nov 01 09:43:59 embed-certs-214580 kubelet[713]: I1101 09:43:59.296937     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:43:59 embed-certs-214580 kubelet[713]: E1101 09:43:59.297169     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:00 embed-certs-214580 kubelet[713]: I1101 09:44:00.301830     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:00 embed-certs-214580 kubelet[713]: E1101 09:44:00.302064     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:01 embed-certs-214580 kubelet[713]: I1101 09:44:01.654023     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:01 embed-certs-214580 kubelet[713]: E1101 09:44:01.654360     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: I1101 09:44:12.206186     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: I1101 09:44:12.335643     713 scope.go:117] "RemoveContainer" containerID="8ec2ef5319ca56ad27d8d82c6f8faeaac16243a6336ec5e5f6e002f9347d7b5b"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: I1101 09:44:12.335965     713 scope.go:117] "RemoveContainer" containerID="d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	Nov 01 09:44:12 embed-certs-214580 kubelet[713]: E1101 09:44:12.336232     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:21 embed-certs-214580 kubelet[713]: I1101 09:44:21.653837     713 scope.go:117] "RemoveContainer" containerID="d03cec41bb10f6a7939fe1cfa1a6d8d33475c2dde5c3b005d6399d826ad89d5e"
	Nov 01 09:44:21 embed-certs-214580 kubelet[713]: E1101 09:44:21.654133     713 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2vxx5_kubernetes-dashboard(f08d1bad-7e3f-401e-b29b-25804d0f1324)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2vxx5" podUID="f08d1bad-7e3f-401e-b29b-25804d0f1324"
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:44:32 embed-certs-214580 systemd[1]: kubelet.service: Consumed 1.801s CPU time.
	
	
	==> kubernetes-dashboard [2c7e75150e82583057ddfb35cc9f50ac38e6bb51044ed6dc95dae3d75032542c] <==
	2025/11/01 09:43:54 Starting overwatch
	2025/11/01 09:43:54 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:54 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:54 Using secret token for csrf signing
	2025/11/01 09:43:54 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:54 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:54 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:43:54 Generating JWE encryption key
	2025/11/01 09:43:54 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:54 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:54 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:54 Creating in-cluster Sidecar client
	2025/11/01 09:43:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:54 Serving insecurely on HTTP port: 9090
	2025/11/01 09:44:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [604c59ebde7d43eda75c4ad48146bec49639d4733d95f23dc69312c970a4a1bb] <==
	I1101 09:43:46.652319       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:46.654733       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [993f4e82116419f59854864bc1ee5f0cf6ba6320e0b5115d8a1cf328f72a9405] <==
	W1101 09:44:11.105883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:13.109064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:13.114590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:15.118368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:15.124943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.128321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.133290       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.136468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.140536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.144440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.149952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.153833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.158217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.161189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.168162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.172542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.178104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.181434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.185768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.188487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.194187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.197315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.201732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.205503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.212465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214580 -n embed-certs-214580
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-214580 -n embed-certs-214580: exit status 2 (348.056075ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-214580 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1101 09:44:37.309196  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.54s)
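
Note on the Pause failures in this group: the trace for the next test below shows the likely shared signature. minikube's pause path shells out to "sudo runc list -f json", which exits non-zero with "open /run/runc: no such file or directory" on this crio node, and the command is surfaced as GUEST_PAUSE (exit status 80). A minimal Go sketch of probing alternative runc state roots follows; the probeRuncRoot helper and the /run/crio/runc candidate path are assumptions for illustration, not minikube code (runc's global --root flag, default /run/runc, selects the state directory):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// probeRuncRoot runs "runc --root <dir> list -f json" for each candidate
	// state directory and returns the first one runc can read.
	func probeRuncRoot(candidates []string) (string, error) {
		for _, root := range candidates {
			out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").CombinedOutput()
			if err == nil {
				return root, nil
			}
			// Print the failure the same way the test trace captures it.
			fmt.Printf("root %s: %v\n%s", root, err, out)
		}
		return "", fmt.Errorf("no usable runc root among %v", candidates)
	}

	func main() {
		// /run/runc is runc's documented default; /run/crio/runc is only a
		// guess at where a crio install might keep its runtime state.
		root, err := probeRuncRoot([]string{"/run/runc", "/run/crio/runc"})
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("runc state root:", root)
	}
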

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-927869 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-927869 --alsologtostderr -v=1: exit status 80 (2.546561367s)

-- stdout --
	* Pausing node default-k8s-diff-port-927869 ... 
	
	

-- /stdout --
** stderr ** 
	I1101 09:44:37.827634  430761 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:37.827957  430761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:37.827970  430761 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:37.827976  430761 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:37.828217  430761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:37.828490  430761 out.go:368] Setting JSON to false
	I1101 09:44:37.828537  430761 mustload.go:66] Loading cluster: default-k8s-diff-port-927869
	I1101 09:44:37.828886  430761 config.go:182] Loaded profile config "default-k8s-diff-port-927869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:37.829306  430761 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-927869 --format={{.State.Status}}
	I1101 09:44:37.851717  430761 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:44:37.852084  430761 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:37.926452  430761 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-01 09:44:37.913648426 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:37.927527  430761 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-927869 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:44:37.929487  430761 out.go:179] * Pausing node default-k8s-diff-port-927869 ... 
	I1101 09:44:37.930899  430761 host.go:66] Checking if "default-k8s-diff-port-927869" exists ...
	I1101 09:44:37.931421  430761 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:37.931648  430761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-927869
	I1101 09:44:37.959377  430761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/default-k8s-diff-port-927869/id_rsa Username:docker}
	I1101 09:44:38.066744  430761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:38.094673  430761 pause.go:52] kubelet running: true
	I1101 09:44:38.094735  430761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:38.275283  430761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:38.275360  430761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:38.352978  430761 cri.go:89] found id: "4533552a7388f145ce63bce88a52b1a47182b6698396974b26aaad8808fef05b"
	I1101 09:44:38.353001  430761 cri.go:89] found id: "b983578032159b0b993e42f332b8f6aac4a69719bad30691b19ee0fc856434fa"
	I1101 09:44:38.353005  430761 cri.go:89] found id: "13f76a8260d422550f4a9ef81a0dafd7dcc5c887fcc6889b15c5a06856071a8d"
	I1101 09:44:38.353007  430761 cri.go:89] found id: "0f0dc1f271394e6568c3b71628f04fea797120a85624b9be410424cfd4b1ce27"
	I1101 09:44:38.353010  430761 cri.go:89] found id: "2f0e301a1717f8b28ff95585a1949a7a840d03416646f8941429f60b86feec30"
	I1101 09:44:38.353013  430761 cri.go:89] found id: "b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be"
	I1101 09:44:38.353015  430761 cri.go:89] found id: "ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7"
	I1101 09:44:38.353018  430761 cri.go:89] found id: "a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07"
	I1101 09:44:38.353020  430761 cri.go:89] found id: "ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62"
	I1101 09:44:38.353025  430761 cri.go:89] found id: "12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	I1101 09:44:38.353027  430761 cri.go:89] found id: "00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562"
	I1101 09:44:38.353029  430761 cri.go:89] found id: ""
	I1101 09:44:38.353065  430761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:38.365066  430761 retry.go:31] will retry after 314.571008ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:38Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:38.680667  430761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:38.697825  430761 pause.go:52] kubelet running: false
	I1101 09:44:38.697888  430761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:38.900169  430761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:38.900307  430761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:38.984723  430761 cri.go:89] found id: "4533552a7388f145ce63bce88a52b1a47182b6698396974b26aaad8808fef05b"
	I1101 09:44:38.984749  430761 cri.go:89] found id: "b983578032159b0b993e42f332b8f6aac4a69719bad30691b19ee0fc856434fa"
	I1101 09:44:38.984755  430761 cri.go:89] found id: "13f76a8260d422550f4a9ef81a0dafd7dcc5c887fcc6889b15c5a06856071a8d"
	I1101 09:44:38.984760  430761 cri.go:89] found id: "0f0dc1f271394e6568c3b71628f04fea797120a85624b9be410424cfd4b1ce27"
	I1101 09:44:38.984764  430761 cri.go:89] found id: "2f0e301a1717f8b28ff95585a1949a7a840d03416646f8941429f60b86feec30"
	I1101 09:44:38.984769  430761 cri.go:89] found id: "b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be"
	I1101 09:44:38.984773  430761 cri.go:89] found id: "ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7"
	I1101 09:44:38.984777  430761 cri.go:89] found id: "a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07"
	I1101 09:44:38.984780  430761 cri.go:89] found id: "ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62"
	I1101 09:44:38.984791  430761 cri.go:89] found id: "12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	I1101 09:44:38.984795  430761 cri.go:89] found id: "00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562"
	I1101 09:44:38.984799  430761 cri.go:89] found id: ""
	I1101 09:44:38.984857  430761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:39.000714  430761 retry.go:31] will retry after 313.177798ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:38Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:39.314081  430761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:39.331013  430761 pause.go:52] kubelet running: false
	I1101 09:44:39.331239  430761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:39.527196  430761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:39.527313  430761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:39.598155  430761 cri.go:89] found id: "4533552a7388f145ce63bce88a52b1a47182b6698396974b26aaad8808fef05b"
	I1101 09:44:39.598181  430761 cri.go:89] found id: "b983578032159b0b993e42f332b8f6aac4a69719bad30691b19ee0fc856434fa"
	I1101 09:44:39.598186  430761 cri.go:89] found id: "13f76a8260d422550f4a9ef81a0dafd7dcc5c887fcc6889b15c5a06856071a8d"
	I1101 09:44:39.598191  430761 cri.go:89] found id: "0f0dc1f271394e6568c3b71628f04fea797120a85624b9be410424cfd4b1ce27"
	I1101 09:44:39.598195  430761 cri.go:89] found id: "2f0e301a1717f8b28ff95585a1949a7a840d03416646f8941429f60b86feec30"
	I1101 09:44:39.598200  430761 cri.go:89] found id: "b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be"
	I1101 09:44:39.598204  430761 cri.go:89] found id: "ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7"
	I1101 09:44:39.598208  430761 cri.go:89] found id: "a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07"
	I1101 09:44:39.598212  430761 cri.go:89] found id: "ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62"
	I1101 09:44:39.598230  430761 cri.go:89] found id: "12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	I1101 09:44:39.598239  430761 cri.go:89] found id: "00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562"
	I1101 09:44:39.598243  430761 cri.go:89] found id: ""
	I1101 09:44:39.598310  430761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:39.610719  430761 retry.go:31] will retry after 426.891231ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:39Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:40.038463  430761 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:40.053550  430761 pause.go:52] kubelet running: false
	I1101 09:44:40.053622  430761 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:40.199374  430761 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:40.199456  430761 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:40.273953  430761 cri.go:89] found id: "4533552a7388f145ce63bce88a52b1a47182b6698396974b26aaad8808fef05b"
	I1101 09:44:40.273977  430761 cri.go:89] found id: "b983578032159b0b993e42f332b8f6aac4a69719bad30691b19ee0fc856434fa"
	I1101 09:44:40.273981  430761 cri.go:89] found id: "13f76a8260d422550f4a9ef81a0dafd7dcc5c887fcc6889b15c5a06856071a8d"
	I1101 09:44:40.273984  430761 cri.go:89] found id: "0f0dc1f271394e6568c3b71628f04fea797120a85624b9be410424cfd4b1ce27"
	I1101 09:44:40.273987  430761 cri.go:89] found id: "2f0e301a1717f8b28ff95585a1949a7a840d03416646f8941429f60b86feec30"
	I1101 09:44:40.273991  430761 cri.go:89] found id: "b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be"
	I1101 09:44:40.273993  430761 cri.go:89] found id: "ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7"
	I1101 09:44:40.273995  430761 cri.go:89] found id: "a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07"
	I1101 09:44:40.273997  430761 cri.go:89] found id: "ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62"
	I1101 09:44:40.274003  430761 cri.go:89] found id: "12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	I1101 09:44:40.274005  430761 cri.go:89] found id: "00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562"
	I1101 09:44:40.274007  430761 cri.go:89] found id: ""
	I1101 09:44:40.274043  430761 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:40.288060  430761 out.go:203] 
	W1101 09:44:40.289665  430761 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:44:40.289692  430761 out.go:285] * 
	* 
	W1101 09:44:40.293920  430761 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:44:40.295322  430761 out.go:203] 

** /stderr **
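
The trace above also shows the retry shape minikube wraps around the runtime check: delays of roughly 314ms, 313ms, and 426ms between attempts at "sudo runc list -f json", after which the error is surfaced as GUEST_PAUSE. A minimal sketch of that jittered-backoff loop follows, assuming a simple standalone retry helper rather than minikube's actual retry package:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retry calls fn up to attempts times, sleeping base plus up to half of
	// base in jitter between tries -- the same shape as the ~314ms, 313ms,
	// and 426ms delays in the trace above.
	func retry(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			if i < attempts-1 {
				d := base + time.Duration(rand.Int63n(int64(base/2)))
				fmt.Printf("will retry after %v: %v\n", d, err)
				time.Sleep(d)
			}
		}
		return err
	}

	func main() {
		// Stand-in for the failing "sudo runc list -f json" probe.
		err := retry(4, 300*time.Millisecond, func() error {
			return fmt.Errorf("list running: runc: open /run/runc: no such file or directory")
		})
		fmt.Println("giving up:", err)
	}
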
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-927869 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-927869
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-927869:

-- stdout --
	[
	    {
	        "Id": "08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c",
	        "Created": "2025-11-01T09:42:32.791979804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 416101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:43:35.912351262Z",
	            "FinishedAt": "2025-11-01T09:43:34.889182486Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/hosts",
	        "LogPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c-json.log",
	        "Name": "/default-k8s-diff-port-927869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-927869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-927869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c",
	                "LowerDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-927869",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-927869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-927869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-927869",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-927869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c6f630d0501662dfe22c9f5cf2a8d627f70ca1cefd6ac1268ecc7efe300418a",
	            "SandboxKey": "/var/run/docker/netns/3c6f630d0501",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-927869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:9b:11:9c:32:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5df57938ba0e2329abf459496ea29ebdbd8c04ec3a35e78ed455192e01829fff",
	                    "EndpointID": "8ea16259a3f4a0f4aa147d52ed7a642ab1be9518044fcb507e92f1b15506a61a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-927869",
	                        "08e9b30a8fc0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
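The inspect dump above is dominated by HostConfig defaults; when only the published ports matter, a Go-template query against the same container is enough. A minimal sketch using the standard docker CLI, assuming the container from this run is still present:

	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-927869

This prints only the 22/2376/5000/8444/32443 host-port map shown under NetworkSettings.Ports above.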
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869: exit status 2 (353.267205ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
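For context, minikube status reports per-component state, and a paused cluster returns a non-zero exit even while the host container is Running. A minimal sketch that also surfaces the kubelet and apiserver fields, reusing this run's profile (the exact exit-code mapping is version-dependent, so treat it as illustrative):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-927869 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'

Exit status 2 with Host=Running is consistent with a mid-Pause cluster, which is why the harness notes "may be ok".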
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-927869 logs -n 25: (1.080347888s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ stop    │ -p newest-cni-722387 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ embed-certs-214580 image list --format=json                                                                                                                                                                                                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p embed-certs-214580 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ default-k8s-diff-port-927869 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p default-k8s-diff-port-927869 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-722387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
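	(The pause row above has no END TIME because it is the step under test that failed; it can be replayed by hand with the same arguments recorded in the table:
	
	  out/minikube-linux-amd64 pause -p default-k8s-diff-port-927869 --alsologtostderr -v=1
	
	The rows after it are cleanup plus the parallel newest-cni start whose log follows.)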
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:38.523287  431145 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:38.523564  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523574  431145 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:38.523578  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523800  431145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:38.524267  431145 out.go:368] Setting JSON to false
	I1101 09:44:38.525629  431145 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5217,"bootTime":1761985062,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:38.525727  431145 start.go:143] virtualization: kvm guest
	I1101 09:44:38.527859  431145 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:38.529045  431145 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:38.529114  431145 notify.go:221] Checking for updates...
	I1101 09:44:38.531328  431145 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:38.533047  431145 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:38.534417  431145 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:38.535653  431145 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:38.537039  431145 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:38.538738  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:38.539220  431145 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:38.565195  431145 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:38.565381  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.631657  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.621287477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.631767  431145 docker.go:319] overlay module found
	I1101 09:44:38.633719  431145 out.go:179] * Using the docker driver based on existing profile
	I1101 09:44:38.635183  431145 start.go:309] selected driver: docker
	I1101 09:44:38.635201  431145 start.go:930] validating driver "docker" against &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.635281  431145 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:38.635786  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.703758  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.691419349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.704058  431145 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:38.704091  431145 cni.go:84] Creating CNI manager for ""
	I1101 09:44:38.704133  431145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:38.704164  431145 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.706149  431145 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:38.707131  431145 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:38.708418  431145 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:38.709522  431145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:38.709565  431145 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:38.709574  431145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:38.709686  431145 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:38.709764  431145 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:38.709778  431145 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:38.709898  431145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:38.731867  431145 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:38.731884  431145 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:38.731903  431145 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:38.731970  431145 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:38.732043  431145 start.go:364] duration metric: took 47.245µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:38.732065  431145 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:44:38.732073  431145 fix.go:54] fixHost starting: 
	I1101 09:44:38.732264  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:38.758201  431145 fix.go:112] recreateIfNeeded on newest-cni-722387: state=Stopped err=<nil>
	W1101 09:44:38.758255  431145 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 01 09:43:58 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:58.079834403Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:43:58 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:58.084738543Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:43:58 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:58.084774852Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.524842979Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=a7f537b1-08b1-436d-8bd9-86a5dc8820dc name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.52566589Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b03a014e-0092-4136-abc0-c1b5f3fe6e18 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.527544079Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=da5405e1-dcba-4545-ad6e-5192d306a695 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.532957267Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h/kubernetes-dashboard" id=0fa162f6-8b1f-4288-a68d-38ab496228a3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.533094849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.538254326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.538499108Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff6fa6e68465b9a2a6bb65ef653ebb1c214c0100740dc04778774090d8d490b8/merged/etc/group: no such file or directory"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.538949767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.585540023Z" level=info msg="Created container 00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h/kubernetes-dashboard" id=0fa162f6-8b1f-4288-a68d-38ab496228a3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.586308586Z" level=info msg="Starting container: 00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562" id=1ef0794e-68dd-435d-9062-c3cd7faec2d2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.588453073Z" level=info msg="Started container" PID=1736 containerID=00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h/kubernetes-dashboard id=1ef0794e-68dd-435d-9062-c3cd7faec2d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=29c1e490eddf9c3ad06f99fa423ee5425681e9ddc7e71d29422b83c3551be7c2
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.010172313Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c6e9e4a3-c12d-4ce4-a9bf-9ddb595e87a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.014009198Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b72f4d3-f1cf-4b4b-b8ce-5c42ad029029 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.017280222Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper" id=3f60b0de-79b0-4026-94cd-d69e395d050a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.017432074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.024905735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.025532131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.088419616Z" level=info msg="Created container 12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper" id=3f60b0de-79b0-4026-94cd-d69e395d050a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.089321184Z" level=info msg="Starting container: 12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c" id=dfc0642d-184f-40fd-b3eb-3277a7e85032 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.091575561Z" level=info msg="Started container" PID=1756 containerID=12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper id=dfc0642d-184f-40fd-b3eb-3277a7e85032 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f89b722ccba1a0c485b0c4d706c5a17e92870b7ed606380a488357bcbc558164
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.149157725Z" level=info msg="Removing container: 2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7" id=49a04eaf-a5b6-4358-b55d-ea9b7d0048a0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.162099117Z" level=info msg="Removed container 2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper" id=49a04eaf-a5b6-4358-b55d-ea9b7d0048a0 name=/runtime.v1.RuntimeService/RemoveContainer
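	
	(The create/start/remove events above come over CRI-O's gRPC runtime API; the same view is available interactively from inside the node. A minimal sketch, assuming the profile is still up:
	
	  out/minikube-linux-amd64 ssh -p default-k8s-diff-port-927869 -- sudo crictl ps -a
	
	crictl ps -a lists running and exited containers by the same container IDs seen in these log lines; the "==> container status <==" section below is essentially that output.)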
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	12a0df8efed6f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   f89b722ccba1a       dashboard-metrics-scraper-6ffb444bf9-vls9b             kubernetes-dashboard
	00b75599b3e3d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   29c1e490eddf9       kubernetes-dashboard-855c9754f9-rlr8h                  kubernetes-dashboard
	4533552a7388f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   8467dde2f9395       storage-provisioner                                    kube-system
	b983578032159       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   94ce95f8b4abe       coredns-66bc5c9577-mlk9t                               kube-system
	cabd4691ada29       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   b03c48b19b878       busybox                                                default
	13f76a8260d42       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   8d9555891d55f       kube-proxy-dszvg                                       kube-system
	0f0dc1f271394       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   605b3952cf6af       kindnet-g9zdl                                          kube-system
	2f0e301a1717f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   8467dde2f9395       storage-provisioner                                    kube-system
	b878398c79315       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   5714805cc2f2c       kube-apiserver-default-k8s-diff-port-927869            kube-system
	ac46fd3af20eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   d2ba3e9942c82       etcd-default-k8s-diff-port-927869                      kube-system
	a306bb6e82ea9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   8d78b3445565a       kube-controller-manager-default-k8s-diff-port-927869   kube-system
	ddfcd2d2a811e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   40f936cbb9ab5       kube-scheduler-default-k8s-diff-port-927869            kube-system
	
	
	==> coredns [b983578032159b0b993e42f332b8f6aac4a69719bad30691b19ee0fc856434fa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56663 - 21212 "HINFO IN 543825508144934640.6642107499073695754. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.022411918s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
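	
	(The dial tcp 10.96.0.1:443 i/o timeouts above are CoreDNS failing to reach the in-cluster apiserver service VIP while the control plane restarts; they clear once the endpoint repopulates. A minimal sketch for checking that endpoint by hand, assuming kubectl is pointed at this profile's kubeconfig context:
	
	  kubectl --context default-k8s-diff-port-927869 get endpointslices -l kubernetes.io/service-name=kubernetes)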
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-927869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-927869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=default-k8s-diff-port-927869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-927869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:43:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-927869
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f6bc8c84-79e6-433c-bb02-212f45767f33
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-mlk9t                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-927869                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-g9zdl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-927869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-927869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-dszvg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-927869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vls9b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rlr8h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 118s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    113s                 kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  113s                 kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     113s                 kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node default-k8s-diff-port-927869 event: Registered Node default-k8s-diff-port-927869 in Controller
	  Normal  NodeReady                97s                  kubelet          Node default-k8s-diff-port-927869 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 59s)    kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 59s)    kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 59s)    kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                  node-controller  Node default-k8s-diff-port-927869 event: Registered Node default-k8s-diff-port-927869 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7] <==
	{"level":"warn","ts":"2025-11-01T09:43:45.153410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.174985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.191629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.227808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.268997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.278147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.295667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.317464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.331410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.342075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.363986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.371346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.388420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.405280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.417148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.441479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.451575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.462808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.474200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.482779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.503226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.511991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.527007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.604513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:43:56.302002Z","caller":"traceutil/trace.go:172","msg":"trace[1355052739] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"123.17963ms","start":"2025-11-01T09:43:56.178799Z","end":"2025-11-01T09:43:56.301979Z","steps":["trace[1355052739] 'process raft request'  (duration: 66.400569ms)","trace[1355052739] 'compare'  (duration: 56.63456ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:44:41 up  1:26,  0 user,  load average: 8.57, 5.98, 3.54
	Linux default-k8s-diff-port-927869 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f0dc1f271394e6568c3b71628f04fea797120a85624b9be410424cfd4b1ce27] <==
	I1101 09:43:47.757071       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:47.757351       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:43:47.757627       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:47.757649       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:47.757683       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:48.152842       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:48.152881       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:48.152934       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:48.153268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:48.353476       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:48.353878       1 metrics.go:72] Registering metrics
	I1101 09:43:48.354037       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:58.061561       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:43:58.061622       1 main.go:301] handling current node
	I1101 09:44:08.067094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:08.067153       1 main.go:301] handling current node
	I1101 09:44:18.061831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:18.061886       1 main.go:301] handling current node
	I1101 09:44:28.061126       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:28.061163       1 main.go:301] handling current node
	I1101 09:44:38.068958       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:38.068990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be] <==
	I1101 09:43:46.224588       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:43:46.225118       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:43:46.225177       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:43:46.225124       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:43:46.229420       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:43:46.235041       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:43:46.235265       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:43:46.235304       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:43:46.235312       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:43:46.235319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:46.235326       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:43:46.238705       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:43:46.267960       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:46.668701       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:43:46.719817       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:43:46.745656       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:46.755337       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:46.766287       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:43:46.815256       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.46.159"}
	I1101 09:43:46.829551       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.195.199"}
	I1101 09:43:47.127877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:49.595396       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:43:49.942190       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:49.942191       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:50.144844       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07] <==
	I1101 09:43:49.579657       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:49.584813       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:43:49.587148       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:43:49.589419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:43:49.589453       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:43:49.589585       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:43:49.589610       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:43:49.589649       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:43:49.589707       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:43:49.589964       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:43:49.589999       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:43:49.590003       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:43:49.591260       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:43:49.592440       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:43:49.592563       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:43:49.592709       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-927869"
	I1101 09:43:49.592778       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:43:49.594995       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:49.595012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:43:49.595015       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:43:49.595036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:43:49.596334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:43:49.598607       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:43:49.600478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:43:49.616351       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [13f76a8260d422550f4a9ef81a0dafd7dcc5c887fcc6889b15c5a06856071a8d] <==
	I1101 09:43:47.575083       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:43:47.638214       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:43:47.738508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:43:47.738575       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:43:47.738658       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:43:47.764150       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:47.764220       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:43:47.771476       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:43:47.771869       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:43:47.771893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:47.773590       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:43:47.773611       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:43:47.773641       1 config.go:200] "Starting service config controller"
	I1101 09:43:47.773646       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:43:47.773658       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:43:47.773663       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:43:47.773876       1 config.go:309] "Starting node config controller"
	I1101 09:43:47.773888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:43:47.773906       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:43:47.874660       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:43:47.874690       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:43:47.874665       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62] <==
	I1101 09:43:44.303730       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:43:46.159675       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:43:46.159724       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:43:46.159736       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:43:46.159746       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:43:46.211202       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:43:46.211239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:46.216425       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:43:46.216639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:46.216658       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:46.216679       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:43:46.319291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:43:50 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:50.298875     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45f45\" (UniqueName: \"kubernetes.io/projected/7cd79d09-83e8-4da1-b944-eb989ee2e25d-kube-api-access-45f45\") pod \"dashboard-metrics-scraper-6ffb444bf9-vls9b\" (UID: \"7cd79d09-83e8-4da1-b944-eb989ee2e25d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b"
	Nov 01 09:43:50 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:50.298963     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7cd79d09-83e8-4da1-b944-eb989ee2e25d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vls9b\" (UID: \"7cd79d09-83e8-4da1-b944-eb989ee2e25d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b"
	Nov 01 09:43:50 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:50.299044     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9986c15-8b9c-4a12-9e39-60df5c19b4c5-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rlr8h\" (UID: \"c9986c15-8b9c-4a12-9e39-60df5c19b4c5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h"
	Nov 01 09:43:54 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:54.089626     721 scope.go:117] "RemoveContainer" containerID="8b63d7f08dfa7b7e87c73f2ee24d2b595a205d06a13f2af14418dab8d8ab8592"
	Nov 01 09:43:54 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:54.347336     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:43:55 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:55.095960     721 scope.go:117] "RemoveContainer" containerID="8b63d7f08dfa7b7e87c73f2ee24d2b595a205d06a13f2af14418dab8d8ab8592"
	Nov 01 09:43:55 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:55.096246     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:43:55 default-k8s-diff-port-927869 kubelet[721]: E1101 09:43:55.096425     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:43:56 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:56.100746     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:43:56 default-k8s-diff-port-927869 kubelet[721]: E1101 09:43:56.101021     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:43:57 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:57.103779     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:43:57 default-k8s-diff-port-927869 kubelet[721]: E1101 09:43:57.104065     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:00 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:00.428810     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h" podStartSLOduration=1.460563016 podStartE2EDuration="10.428785544s" podCreationTimestamp="2025-11-01 09:43:50 +0000 UTC" firstStartedPulling="2025-11-01 09:43:50.558553917 +0000 UTC m=+7.687007935" lastFinishedPulling="2025-11-01 09:43:59.526776437 +0000 UTC m=+16.655230463" observedRunningTime="2025-11-01 09:44:00.13099709 +0000 UTC m=+17.259451116" watchObservedRunningTime="2025-11-01 09:44:00.428785544 +0000 UTC m=+17.557239572"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:11.009576     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:11.147768     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:11.147995     721 scope.go:117] "RemoveContainer" containerID="12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: E1101 09:44:11.148228     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:16 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:16.299065     721 scope.go:117] "RemoveContainer" containerID="12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	Nov 01 09:44:16 default-k8s-diff-port-927869 kubelet[721]: E1101 09:44:16.299255     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:27 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:27.009205     721 scope.go:117] "RemoveContainer" containerID="12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	Nov 01 09:44:27 default-k8s-diff-port-927869 kubelet[721]: E1101 09:44:27.010024     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: kubelet.service: Consumed 1.957s CPU time.
	
	
	==> kubernetes-dashboard [00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562] <==
	2025/11/01 09:43:59 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:59 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:59 Using secret token for csrf signing
	2025/11/01 09:43:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:59 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:43:59 Generating JWE encryption key
	2025/11/01 09:43:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:59 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:59 Creating in-cluster Sidecar client
	2025/11/01 09:43:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:59 Serving insecurely on HTTP port: 9090
	2025/11/01 09:44:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:59 Starting overwatch
	
	
	==> storage-provisioner [2f0e301a1717f8b28ff95585a1949a7a840d03416646f8941429f60b86feec30] <==
	I1101 09:43:47.456014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:47.465665       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [4533552a7388f145ce63bce88a52b1a47182b6698396974b26aaad8808fef05b] <==
	W1101 09:44:15.721104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.724790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:17.731646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.735693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.740889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.744702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.752710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.756737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.762024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.764803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.769327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.772891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.777364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.781018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.786656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.789855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.794999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.799344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.803780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.806482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.811772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:37.815251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:37.825955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:39.829826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:39.834027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
E1101 09:44:42.054162  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869: exit status 2 (338.777136ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-927869
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-927869:

-- stdout --
	[
	    {
	        "Id": "08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c",
	        "Created": "2025-11-01T09:42:32.791979804Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 416101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:43:35.912351262Z",
	            "FinishedAt": "2025-11-01T09:43:34.889182486Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/hostname",
	        "HostsPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/hosts",
	        "LogPath": "/var/lib/docker/containers/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c/08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c-json.log",
	        "Name": "/default-k8s-diff-port-927869",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-927869:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-927869",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "08e9b30a8fc007197edfa2125435335fa9ac17fa855ec0ffa846b4b606993f3c",
	                "LowerDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3544594f12e13aadd221ff6b7ec8dec2829b1cf791a46152da64f0e7b407f995/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-927869",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-927869/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-927869",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-927869",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-927869",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3c6f630d0501662dfe22c9f5cf2a8d627f70ca1cefd6ac1268ecc7efe300418a",
	            "SandboxKey": "/var/run/docker/netns/3c6f630d0501",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-927869": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:9b:11:9c:32:a6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5df57938ba0e2329abf459496ea29ebdbd8c04ec3a35e78ed455192e01829fff",
	                    "EndpointID": "8ea16259a3f4a0f4aa147d52ed7a642ab1be9518044fcb507e92f1b15506a61a",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-927869",
	                        "08e9b30a8fc0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869: exit status 2 (354.171631ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927869 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-927869 logs -n 25: (1.149822638s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ addons  │ enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ stop    │ -p newest-cni-722387 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ embed-certs-214580 image list --format=json                                                                                                                                                                                                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p embed-certs-214580 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ default-k8s-diff-port-927869 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p default-k8s-diff-port-927869 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-722387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:38.523287  431145 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:38.523564  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523574  431145 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:38.523578  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523800  431145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:38.524267  431145 out.go:368] Setting JSON to false
	I1101 09:44:38.525629  431145 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5217,"bootTime":1761985062,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:38.525727  431145 start.go:143] virtualization: kvm guest
	I1101 09:44:38.527859  431145 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:38.529045  431145 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:38.529114  431145 notify.go:221] Checking for updates...
	I1101 09:44:38.531328  431145 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:38.533047  431145 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:38.534417  431145 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:38.535653  431145 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:38.537039  431145 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:38.538738  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:38.539220  431145 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:38.565195  431145 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:38.565381  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.631657  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.621287477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.631767  431145 docker.go:319] overlay module found
	I1101 09:44:38.633719  431145 out.go:179] * Using the docker driver based on existing profile
	I1101 09:44:38.635183  431145 start.go:309] selected driver: docker
	I1101 09:44:38.635201  431145 start.go:930] validating driver "docker" against &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.635281  431145 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:38.635786  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.703758  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.691419349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.704058  431145 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:38.704091  431145 cni.go:84] Creating CNI manager for ""
	I1101 09:44:38.704133  431145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:38.704164  431145 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.706149  431145 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:38.707131  431145 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:38.708418  431145 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:38.709522  431145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:38.709565  431145 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:38.709574  431145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:38.709686  431145 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:38.709764  431145 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:38.709778  431145 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:38.709898  431145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:38.731867  431145 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:38.731884  431145 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:38.731903  431145 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:38.731970  431145 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:38.732043  431145 start.go:364] duration metric: took 47.245µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:38.732065  431145 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:44:38.732073  431145 fix.go:54] fixHost starting: 
	I1101 09:44:38.732264  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:38.758201  431145 fix.go:112] recreateIfNeeded on newest-cni-722387: state=Stopped err=<nil>
	W1101 09:44:38.758255  431145 fix.go:138] unexpected machine state, will restart: <nil>
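	
	The restart decision above turns on a single probe: "docker container inspect newest-cni-722387 --format={{.State.Status}}", whose Stopped result is why fixHost reports "unexpected machine state, will restart". A minimal Go sketch of that probe, assuming only the Docker CLI on PATH (illustrative only, not minikube's cli_runner implementation; the profile name is taken from the log):
	
	    package main
	    
	    import (
	    	"fmt"
	    	"os/exec"
	    	"strings"
	    )
	    
	    // containerState shells out to `docker container inspect` with a Go-template
	    // format, the same command the log records, and returns the state string.
	    func containerState(name string) (string, error) {
	    	out, err := exec.Command("docker", "container", "inspect", name,
	    		"--format", "{{.State.Status}}").Output()
	    	if err != nil {
	    		return "", fmt.Errorf("inspect %s: %w", name, err)
	    	}
	    	return strings.TrimSpace(string(out)), nil
	    }
	    
	    func main() {
	    	state, err := containerState("newest-cni-722387")
	    	if err != nil {
	    		fmt.Println("docker unavailable or container missing:", err)
	    		return
	    	}
	    	// Docker reports "exited" for a stopped container; minikube surfaces
	    	// that as state=Stopped in the fix.go lines above (an interpretation
	    	// of the log, not a quote of minikube source).
	    	fmt.Println("state:", state)
	    }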
	
	
	==> CRI-O <==
	Nov 01 09:43:58 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:58.079834403Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 01 09:43:58 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:58.084738543Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 01 09:43:58 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:58.084774852Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.524842979Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=a7f537b1-08b1-436d-8bd9-86a5dc8820dc name=/runtime.v1.ImageService/PullImage
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.52566589Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=b03a014e-0092-4136-abc0-c1b5f3fe6e18 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.527544079Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=da5405e1-dcba-4545-ad6e-5192d306a695 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.532957267Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h/kubernetes-dashboard" id=0fa162f6-8b1f-4288-a68d-38ab496228a3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.533094849Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.538254326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.538499108Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ff6fa6e68465b9a2a6bb65ef653ebb1c214c0100740dc04778774090d8d490b8/merged/etc/group: no such file or directory"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.538949767Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.585540023Z" level=info msg="Created container 00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h/kubernetes-dashboard" id=0fa162f6-8b1f-4288-a68d-38ab496228a3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.586308586Z" level=info msg="Starting container: 00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562" id=1ef0794e-68dd-435d-9062-c3cd7faec2d2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:43:59 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:43:59.588453073Z" level=info msg="Started container" PID=1736 containerID=00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h/kubernetes-dashboard id=1ef0794e-68dd-435d-9062-c3cd7faec2d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=29c1e490eddf9c3ad06f99fa423ee5425681e9ddc7e71d29422b83c3551be7c2
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.010172313Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c6e9e4a3-c12d-4ce4-a9bf-9ddb595e87a3 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.014009198Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=1b72f4d3-f1cf-4b4b-b8ce-5c42ad029029 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.017280222Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper" id=3f60b0de-79b0-4026-94cd-d69e395d050a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.017432074Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.024905735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.025532131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.088419616Z" level=info msg="Created container 12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper" id=3f60b0de-79b0-4026-94cd-d69e395d050a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.089321184Z" level=info msg="Starting container: 12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c" id=dfc0642d-184f-40fd-b3eb-3277a7e85032 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.091575561Z" level=info msg="Started container" PID=1756 containerID=12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper id=dfc0642d-184f-40fd-b3eb-3277a7e85032 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f89b722ccba1a0c485b0c4d706c5a17e92870b7ed606380a488357bcbc558164
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.149157725Z" level=info msg="Removing container: 2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7" id=49a04eaf-a5b6-4358-b55d-ea9b7d0048a0 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 01 09:44:11 default-k8s-diff-port-927869 crio[564]: time="2025-11-01T09:44:11.162099117Z" level=info msg="Removed container 2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b/dashboard-metrics-scraper" id=49a04eaf-a5b6-4358-b55d-ea9b7d0048a0 name=/runtime.v1.RuntimeService/RemoveContainer
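	
	Each "Checking image status", "Creating container", and "Started container" entry above is an RPC on CRI-O's /runtime.v1.ImageService and /runtime.v1.RuntimeService. A hedged Go sketch of a client issuing one such RuntimeService call over CRI-O's default socket; the socket path and the k8s.io/cri-api module are assumptions here, and this is not kubelet or minikube code:
	
	    package main
	    
	    import (
	    	"context"
	    	"fmt"
	    	"time"
	    
	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )
	    
	    func main() {
	    	// CRI-O's default CRI endpoint (assumed path; minikube's node may differ).
	    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()
	    
	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()
	    
	    	// ListContainers is a sibling RPC of the CreateContainer/StartContainer
	    	// calls logged above, on the same /runtime.v1.RuntimeService.
	    	client := runtimev1.NewRuntimeServiceClient(conn)
	    	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range resp.Containers {
	    		fmt.Println(c.Id, c.Metadata.Name, c.State) // roughly what `crictl ps -a` shows
	    	}
	    }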
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	12a0df8efed6f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago      Exited              dashboard-metrics-scraper   2                   f89b722ccba1a       dashboard-metrics-scraper-6ffb444bf9-vls9b             kubernetes-dashboard
	00b75599b3e3d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   29c1e490eddf9       kubernetes-dashboard-855c9754f9-rlr8h                  kubernetes-dashboard
	4533552a7388f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   8467dde2f9395       storage-provisioner                                    kube-system
	b983578032159       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   94ce95f8b4abe       coredns-66bc5c9577-mlk9t                               kube-system
	cabd4691ada29       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   b03c48b19b878       busybox                                                default
	13f76a8260d42       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   8d9555891d55f       kube-proxy-dszvg                                       kube-system
	0f0dc1f271394       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   605b3952cf6af       kindnet-g9zdl                                          kube-system
	2f0e301a1717f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   8467dde2f9395       storage-provisioner                                    kube-system
	b878398c79315       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   5714805cc2f2c       kube-apiserver-default-k8s-diff-port-927869            kube-system
	ac46fd3af20eb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   d2ba3e9942c82       etcd-default-k8s-diff-port-927869                      kube-system
	a306bb6e82ea9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   8d78b3445565a       kube-controller-manager-default-k8s-diff-port-927869   kube-system
	ddfcd2d2a811e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   40f936cbb9ab5       kube-scheduler-default-k8s-diff-port-927869            kube-system
	
	
	==> coredns [b983578032159b0b993e42f332b8f6aac4a69719bad30691b19ee0fc856434fa] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56663 - 21212 "HINFO IN 543825508144934640.6642107499073695754. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.022411918s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
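	
	The list failures above share one symptom: the kubernetes plugin reaches the API server through the cluster VIP 10.96.0.1:443 and the TCP dial itself times out, which is also why the ready plugin keeps reporting "Still waiting on: kubernetes". A minimal Go connectivity probe for that failure mode, with the address and timeout taken from the log rather than from CoreDNS internals:
	
	    package main
	    
	    import (
	    	"fmt"
	    	"net"
	    	"time"
	    )
	    
	    func main() {
	    	// Same dial CoreDNS's client-go ultimately makes to list Services,
	    	// Namespaces, and EndpointSlices; an "i/o timeout" here reproduces
	    	// the [ERROR] plugin/kubernetes lines in the log.
	    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	    	if err != nil {
	    		fmt.Println("apiserver VIP unreachable:", err)
	    		return
	    	}
	    	defer conn.Close()
	    	fmt.Println("apiserver VIP reachable from the pod network")
	    }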
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-927869
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-927869
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=default-k8s-diff-port-927869
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_42_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:42:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-927869
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:42:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 01 Nov 2025 09:44:26 +0000   Sat, 01 Nov 2025 09:43:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-927869
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f6bc8c84-79e6-433c-bb02-212f45767f33
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-mlk9t                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-927869                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-g9zdl                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-927869             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-927869    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-dszvg                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-927869             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vls9b              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rlr8h                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-927869 event: Registered Node default-k8s-diff-port-927869 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-927869 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 61s)  kubelet          Node default-k8s-diff-port-927869 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-927869 event: Registered Node default-k8s-diff-port-927869 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [ac46fd3af20eb400a0111854bc5d701bce1483809931f7f410906fe4c1c591b7] <==
	{"level":"warn","ts":"2025-11-01T09:43:45.153410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.174985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.191629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.227808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.268997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.278147Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.295667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.317464Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.331410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.342075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.363986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.371346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.388420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.405280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.417148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.441479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.451575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.462808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.474200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.482779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.503226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.511991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.527007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:43:45.604513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48452","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-01T09:43:56.302002Z","caller":"traceutil/trace.go:172","msg":"trace[1355052739] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"123.17963ms","start":"2025-11-01T09:43:56.178799Z","end":"2025-11-01T09:43:56.301979Z","steps":["trace[1355052739] 'process raft request'  (duration: 66.400569ms)","trace[1355052739] 'compare'  (duration: 56.63456ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:44:43 up  1:27,  0 user,  load average: 8.57, 5.98, 3.54
	Linux default-k8s-diff-port-927869 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0f0dc1f271394e6568c3b71628f04fea797120a85624b9be410424cfd4b1ce27] <==
	I1101 09:43:47.757071       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:43:47.757351       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1101 09:43:47.757627       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:43:47.757649       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:43:47.757683       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:43:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:43:48.152842       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:43:48.152881       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:43:48.152934       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:43:48.153268       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:43:48.353476       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:43:48.353878       1 metrics.go:72] Registering metrics
	I1101 09:43:48.354037       1 controller.go:711] "Syncing nftables rules"
	I1101 09:43:58.061561       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:43:58.061622       1 main.go:301] handling current node
	I1101 09:44:08.067094       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:08.067153       1 main.go:301] handling current node
	I1101 09:44:18.061831       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:18.061886       1 main.go:301] handling current node
	I1101 09:44:28.061126       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:28.061163       1 main.go:301] handling current node
	I1101 09:44:38.068958       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1101 09:44:38.068990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b878398c7931594e4cb6c3c4ed4781cb791a1b90248618542f29de81aedad9be] <==
	I1101 09:43:46.224588       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1101 09:43:46.225118       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:43:46.225177       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:43:46.225124       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1101 09:43:46.229420       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:43:46.235041       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:43:46.235265       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1101 09:43:46.235304       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:43:46.235312       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:43:46.235319       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:43:46.235326       1 cache.go:39] Caches are synced for autoregister controller
	E1101 09:43:46.238705       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1101 09:43:46.267960       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:43:46.668701       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:43:46.719817       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:43:46.745656       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:43:46.755337       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:43:46.766287       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:43:46.815256       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.46.159"}
	I1101 09:43:46.829551       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.195.199"}
	I1101 09:43:47.127877       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:43:49.595396       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:43:49.942190       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:49.942191       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:43:50.144844       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [a306bb6e82ea9a3bfdbe69350daead10910af77d87ca4cb0b5eb7021a3fb5b07] <==
	I1101 09:43:49.579657       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:49.584813       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1101 09:43:49.587148       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:43:49.589419       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:43:49.589453       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:43:49.589585       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:43:49.589610       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:43:49.589649       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1101 09:43:49.589707       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1101 09:43:49.589964       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:43:49.589999       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1101 09:43:49.590003       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:43:49.591260       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:43:49.592440       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1101 09:43:49.592563       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1101 09:43:49.592709       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-927869"
	I1101 09:43:49.592778       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1101 09:43:49.594995       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:43:49.595012       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1101 09:43:49.595015       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1101 09:43:49.595036       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1101 09:43:49.596334       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1101 09:43:49.598607       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:43:49.600478       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:43:49.616351       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [13f76a8260d422550f4a9ef81a0dafd7dcc5c887fcc6889b15c5a06856071a8d] <==
	I1101 09:43:47.575083       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:43:47.638214       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:43:47.738508       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:43:47.738575       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1101 09:43:47.738658       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:43:47.764150       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:43:47.764220       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:43:47.771476       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:43:47.771869       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:43:47.771893       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:47.773590       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:43:47.773611       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:43:47.773641       1 config.go:200] "Starting service config controller"
	I1101 09:43:47.773646       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:43:47.773658       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:43:47.773663       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:43:47.773876       1 config.go:309] "Starting node config controller"
	I1101 09:43:47.773888       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:43:47.773906       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:43:47.874660       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1101 09:43:47.874690       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:43:47.874665       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ddfcd2d2a811ee1271d5babad45f6a9e1ea864dae01cc3517fe4f1fb4e156a62] <==
	I1101 09:43:44.303730       1 serving.go:386] Generated self-signed cert in-memory
	W1101 09:43:46.159675       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1101 09:43:46.159724       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1101 09:43:46.159736       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1101 09:43:46.159746       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1101 09:43:46.211202       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:43:46.211239       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:43:46.216425       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:43:46.216639       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:46.216658       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:43:46.216679       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:43:46.319291       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:43:50 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:50.298875     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45f45\" (UniqueName: \"kubernetes.io/projected/7cd79d09-83e8-4da1-b944-eb989ee2e25d-kube-api-access-45f45\") pod \"dashboard-metrics-scraper-6ffb444bf9-vls9b\" (UID: \"7cd79d09-83e8-4da1-b944-eb989ee2e25d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b"
	Nov 01 09:43:50 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:50.298963     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7cd79d09-83e8-4da1-b944-eb989ee2e25d-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vls9b\" (UID: \"7cd79d09-83e8-4da1-b944-eb989ee2e25d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b"
	Nov 01 09:43:50 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:50.299044     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c9986c15-8b9c-4a12-9e39-60df5c19b4c5-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rlr8h\" (UID: \"c9986c15-8b9c-4a12-9e39-60df5c19b4c5\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h"
	Nov 01 09:43:54 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:54.089626     721 scope.go:117] "RemoveContainer" containerID="8b63d7f08dfa7b7e87c73f2ee24d2b595a205d06a13f2af14418dab8d8ab8592"
	Nov 01 09:43:54 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:54.347336     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 01 09:43:55 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:55.095960     721 scope.go:117] "RemoveContainer" containerID="8b63d7f08dfa7b7e87c73f2ee24d2b595a205d06a13f2af14418dab8d8ab8592"
	Nov 01 09:43:55 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:55.096246     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:43:55 default-k8s-diff-port-927869 kubelet[721]: E1101 09:43:55.096425     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:43:56 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:56.100746     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:43:56 default-k8s-diff-port-927869 kubelet[721]: E1101 09:43:56.101021     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:43:57 default-k8s-diff-port-927869 kubelet[721]: I1101 09:43:57.103779     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:43:57 default-k8s-diff-port-927869 kubelet[721]: E1101 09:43:57.104065     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:00 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:00.428810     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rlr8h" podStartSLOduration=1.460563016 podStartE2EDuration="10.428785544s" podCreationTimestamp="2025-11-01 09:43:50 +0000 UTC" firstStartedPulling="2025-11-01 09:43:50.558553917 +0000 UTC m=+7.687007935" lastFinishedPulling="2025-11-01 09:43:59.526776437 +0000 UTC m=+16.655230463" observedRunningTime="2025-11-01 09:44:00.13099709 +0000 UTC m=+17.259451116" watchObservedRunningTime="2025-11-01 09:44:00.428785544 +0000 UTC m=+17.557239572"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:11.009576     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:11.147768     721 scope.go:117] "RemoveContainer" containerID="2e207389d6f98e949594cfa8f95ab8545cb4c630d7b63adb0d56ac752b8a41d7"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:11.147995     721 scope.go:117] "RemoveContainer" containerID="12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	Nov 01 09:44:11 default-k8s-diff-port-927869 kubelet[721]: E1101 09:44:11.148228     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:16 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:16.299065     721 scope.go:117] "RemoveContainer" containerID="12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	Nov 01 09:44:16 default-k8s-diff-port-927869 kubelet[721]: E1101 09:44:16.299255     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:27 default-k8s-diff-port-927869 kubelet[721]: I1101 09:44:27.009205     721 scope.go:117] "RemoveContainer" containerID="12a0df8efed6f84e7186551c376b5425ef0135e962bc200406ca9a99a5cb8c0c"
	Nov 01 09:44:27 default-k8s-diff-port-927869 kubelet[721]: E1101 09:44:27.010024     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vls9b_kubernetes-dashboard(7cd79d09-83e8-4da1-b944-eb989ee2e25d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vls9b" podUID="7cd79d09-83e8-4da1-b944-eb989ee2e25d"
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 01 09:44:38 default-k8s-diff-port-927869 systemd[1]: kubelet.service: Consumed 1.957s CPU time.
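The kubelet log shows dashboard-metrics-scraper in CrashLoopBackOff, with the backoff doubling from 10s to 20s before systemd stops the kubelet at 09:44:38. A plausible follow-up, had the node stayed up, would be to pull the previous container's logs; the commands below are a hypothetical sketch reusing the pod name and namespace from the log, not part of the recorded run:

	kubectl --context default-k8s-diff-port-927869 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-6ffb444bf9-vls9b
	kubectl --context default-k8s-diff-port-927869 -n kubernetes-dashboard \
	  logs --previous dashboard-metrics-scraper-6ffb444bf9-vls9b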
	
	
	==> kubernetes-dashboard [00b75599b3e3df738c3ef77c230023d8b285f8006296b36c85c3b173e3298562] <==
	2025/11/01 09:43:59 Using namespace: kubernetes-dashboard
	2025/11/01 09:43:59 Using in-cluster config to connect to apiserver
	2025/11/01 09:43:59 Using secret token for csrf signing
	2025/11/01 09:43:59 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/01 09:43:59 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/01 09:43:59 Successful initial request to the apiserver, version: v1.34.1
	2025/11/01 09:43:59 Generating JWE encryption key
	2025/11/01 09:43:59 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/01 09:43:59 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/01 09:43:59 Initializing JWE encryption key from synchronized object
	2025/11/01 09:43:59 Creating in-cluster Sidecar client
	2025/11/01 09:43:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/01 09:43:59 Serving insecurely on HTTP port: 9090
	2025/11/01 09:43:59 Starting overwatch
	2025/11/01 09:44:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
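The dashboard's metric client health check fails because it cannot reach the dashboard-metrics-scraper Service, consistent with the scraper pod crash-looping in the kubelet log above. A way to confirm the Service has no ready backends (hypothetical commands, not executed in this run):

	kubectl --context default-k8s-diff-port-927869 -n kubernetes-dashboard \
	  get service dashboard-metrics-scraper
	kubectl --context default-k8s-diff-port-927869 -n kubernetes-dashboard \
	  get endpointslices -l kubernetes.io/service-name=dashboard-metrics-scraper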
	
	
	==> storage-provisioner [2f0e301a1717f8b28ff95585a1949a7a840d03416646f8941429f60b86feec30] <==
	I1101 09:43:47.456014       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1101 09:43:47.465665       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
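This first storage-provisioner instance exits fatally at 09:43:47 because the in-cluster apiserver VIP is not yet accepting connections during the restart; the replacement instance in the next block runs normally, so the crash looks transient. A direct reproduction of the failing request from inside the node might look like this (hypothetical, assuming SSH access via the minikube binary used elsewhere in this report and curl being present in the guest):

	out/minikube-linux-amd64 -p default-k8s-diff-port-927869 ssh -- \
	  curl -sk https://10.96.0.1:443/version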
	
	
	==> storage-provisioner [4533552a7388f145ce63bce88a52b1a47182b6698396974b26aaad8808fef05b] <==
	W1101 09:44:17.731646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.735693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:19.740889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.744702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:21.752710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.756737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:23.762024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.764803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:25.769327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.772891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:27.777364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.781018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:29.786656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.789855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:31.794999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.799344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:33.803780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.806482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:35.811772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:37.815251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:37.825955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:39.829826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:39.834027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:41.837250       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1101 09:44:41.841875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
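The steady stream of warnings comes from the apiserver tagging every v1 Endpoints request with the v1.33+ deprecation notice, most likely because this storage-provisioner build still polls (or leader-elects on) an Endpoints object; that attribution is an inference, not something the log states. The EndpointSlice view of the same data, per the notice, would be (hypothetical command, not run by the test):

	kubectl --context default-k8s-diff-port-927869 get endpointslices.discovery.k8s.io -A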
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869: exit status 2 (362.917041ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-722387 --alsologtostderr -v=1
E1101 09:44:50.416791  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-722387 --alsologtostderr -v=1: exit status 80 (1.535043663s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-722387 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:44:49.931865  435240 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:49.932139  435240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:49.932150  435240 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:49.932154  435240 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:49.932369  435240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:49.932626  435240 out.go:368] Setting JSON to false
	I1101 09:44:49.932674  435240 mustload.go:66] Loading cluster: newest-cni-722387
	I1101 09:44:49.933107  435240 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:49.933718  435240 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:49.953249  435240 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:49.953551  435240 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:50.018263  435240 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-01 09:44:50.005928798 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:50.019131  435240 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-722387 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1101 09:44:50.021165  435240 out.go:179] * Pausing node newest-cni-722387 ... 
	I1101 09:44:50.022698  435240 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:50.023013  435240 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:50.023058  435240 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:50.042752  435240 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:50.143132  435240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:50.156749  435240 pause.go:52] kubelet running: true
	I1101 09:44:50.156857  435240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:50.291677  435240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:50.291794  435240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:50.360639  435240 cri.go:89] found id: "f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd"
	I1101 09:44:50.360661  435240 cri.go:89] found id: "6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4"
	I1101 09:44:50.360665  435240 cri.go:89] found id: "8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00"
	I1101 09:44:50.360668  435240 cri.go:89] found id: "0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7"
	I1101 09:44:50.360671  435240 cri.go:89] found id: "5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331"
	I1101 09:44:50.360675  435240 cri.go:89] found id: "d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df"
	I1101 09:44:50.360679  435240 cri.go:89] found id: ""
	I1101 09:44:50.360726  435240 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:50.372484  435240 retry.go:31] will retry after 224.061769ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:50Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:50.596985  435240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:50.610360  435240 pause.go:52] kubelet running: false
	I1101 09:44:50.610437  435240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:50.727388  435240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:50.727515  435240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:50.799748  435240 cri.go:89] found id: "f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd"
	I1101 09:44:50.799785  435240 cri.go:89] found id: "6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4"
	I1101 09:44:50.799793  435240 cri.go:89] found id: "8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00"
	I1101 09:44:50.799799  435240 cri.go:89] found id: "0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7"
	I1101 09:44:50.799804  435240 cri.go:89] found id: "5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331"
	I1101 09:44:50.799810  435240 cri.go:89] found id: "d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df"
	I1101 09:44:50.799816  435240 cri.go:89] found id: ""
	I1101 09:44:50.799995  435240 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:50.812152  435240 retry.go:31] will retry after 367.652642ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:50Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:51.180849  435240 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:44:51.194822  435240 pause.go:52] kubelet running: false
	I1101 09:44:51.194880  435240 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1101 09:44:51.308576  435240 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1101 09:44:51.308678  435240 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1101 09:44:51.379666  435240 cri.go:89] found id: "f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd"
	I1101 09:44:51.379696  435240 cri.go:89] found id: "6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4"
	I1101 09:44:51.379702  435240 cri.go:89] found id: "8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00"
	I1101 09:44:51.379708  435240 cri.go:89] found id: "0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7"
	I1101 09:44:51.379712  435240 cri.go:89] found id: "5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331"
	I1101 09:44:51.379717  435240 cri.go:89] found id: "d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df"
	I1101 09:44:51.379721  435240 cri.go:89] found id: ""
	I1101 09:44:51.379781  435240 ssh_runner.go:195] Run: sudo runc list -f json
	I1101 09:44:51.395064  435240 out.go:203] 
	W1101 09:44:51.396585  435240 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1101 09:44:51.396605  435240 out.go:285] * 
	W1101 09:44:51.400848  435240 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1101 09:44:51.402200  435240 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-722387 --alsologtostderr -v=1 failed: exit status 80
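The pause fails in the guest, not on the host: minikube disables the kubelet (pause.go:52 flips from "kubelet running: true" to false), then tries to enumerate running containers with `sudo runc list -f json`, and every attempt dies on "open /run/runc: no such file or directory" until the GUEST_PAUSE exit. Hypothetical in-node checks to localize the problem (none were part of the run; the alternate state directory is a placeholder, on the assumption that CRI-O may keep runc state under a root other than /run/runc):

	out/minikube-linux-amd64 -p newest-cni-722387 ssh -- sudo ls -ld /run/runc
	out/minikube-linux-amd64 -p newest-cni-722387 ssh -- sudo crictl ps -a
	# if CRI-O keeps runc state elsewhere, point runc at that root explicitly:
	out/minikube-linux-amd64 -p newest-cni-722387 ssh -- sudo runc --root <state-dir> list -f json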
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-722387
helpers_test.go:243: (dbg) docker inspect newest-cni-722387:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6",
	        "Created": "2025-11-01T09:44:06.484487044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:44:38.789922469Z",
	            "FinishedAt": "2025-11-01T09:44:37.823584839Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/hosts",
	        "LogPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6-json.log",
	        "Name": "/newest-cni-722387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-722387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-722387",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6",
	                "LowerDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-722387",
	                "Source": "/var/lib/docker/volumes/newest-cni-722387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-722387",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-722387",
	                "name.minikube.sigs.k8s.io": "newest-cni-722387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d933cfafd96a4352eaa7d16ce718e1359577f09f0259075121bacf2ff02e9f07",
	            "SandboxKey": "/var/run/docker/netns/d933cfafd96a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-722387": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:bd:6a:8c:e9:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "097f5920ceba29035e623d66cf12db8333915593551ced6800060e5546bfb0e0",
	                    "EndpointID": "f32d8ad7b4cfdc6f76b68866410cec114d3790581b9269c5686d68479e7a1ea2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-722387",
	                        "5cc4aeec7217"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
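The inspect output shows the container itself healthy and unpaused ("Running": true, "Paused": false), which narrows the failure to guest-side runtime state rather than the Docker layer. The two fields the post-mortem cares about can be pulled directly (hypothetical one-liner, not part of the recorded run):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-722387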
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387: exit status 2 (363.331441ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-722387 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ stop    │ -p newest-cni-722387 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ embed-certs-214580 image list --format=json                                                                                                                                                                                                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p embed-certs-214580 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ default-k8s-diff-port-927869 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p default-k8s-diff-port-927869 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-722387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p default-k8s-diff-port-927869                                                                                                                                                                                                               │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p default-k8s-diff-port-927869                                                                                                                                                                                                               │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ newest-cni-722387 image list --format=json                                                                                                                                                                                                    │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p newest-cni-722387 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:38.523287  431145 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:38.523564  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523574  431145 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:38.523578  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523800  431145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:38.524267  431145 out.go:368] Setting JSON to false
	I1101 09:44:38.525629  431145 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5217,"bootTime":1761985062,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:38.525727  431145 start.go:143] virtualization: kvm guest
	I1101 09:44:38.527859  431145 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:38.529045  431145 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:38.529114  431145 notify.go:221] Checking for updates...
	I1101 09:44:38.531328  431145 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:38.533047  431145 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:38.534417  431145 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:38.535653  431145 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:38.537039  431145 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:38.538738  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:38.539220  431145 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:38.565195  431145 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:38.565381  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.631657  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.621287477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.631767  431145 docker.go:319] overlay module found
	I1101 09:44:38.633719  431145 out.go:179] * Using the docker driver based on existing profile
	I1101 09:44:38.635183  431145 start.go:309] selected driver: docker
	I1101 09:44:38.635201  431145 start.go:930] validating driver "docker" against &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.635281  431145 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:38.635786  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.703758  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.691419349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.704058  431145 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:38.704091  431145 cni.go:84] Creating CNI manager for ""
	I1101 09:44:38.704133  431145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:38.704164  431145 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.706149  431145 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:38.707131  431145 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:38.708418  431145 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:38.709522  431145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:38.709565  431145 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:38.709574  431145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:38.709686  431145 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:38.709764  431145 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:38.709778  431145 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:38.709898  431145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:38.731867  431145 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:38.731884  431145 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:38.731903  431145 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:38.731970  431145 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:38.732043  431145 start.go:364] duration metric: took 47.245µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:38.732065  431145 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:44:38.732073  431145 fix.go:54] fixHost starting: 
	I1101 09:44:38.732264  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:38.758201  431145 fix.go:112] recreateIfNeeded on newest-cni-722387: state=Stopped err=<nil>
	W1101 09:44:38.758255  431145 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:44:38.760106  431145 out.go:252] * Restarting existing docker container for "newest-cni-722387" ...
	I1101 09:44:38.760187  431145 cli_runner.go:164] Run: docker start newest-cni-722387
	I1101 09:44:39.054166  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:39.076615  431145 kic.go:430] container "newest-cni-722387" state is running.
	I1101 09:44:39.077052  431145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:39.099415  431145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:39.099699  431145 machine.go:94] provisionDockerMachine start ...
	I1101 09:44:39.099782  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:39.122460  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:39.122803  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:39.122824  431145 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:44:39.123514  431145 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56538->127.0.0.1:33133: read: connection reset by peer
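	The connection reset above is expected: the container was just restarted and sshd inside it is not yet accepting connections, so libmachine simply keeps retrying until the handshake succeeds (about three seconds later, at 09:44:42 below). A minimal sketch of an equivalent wait loop, using the mapped port 33133 and key path taken from this log (the loop itself is illustrative, not minikube's actual code):
	
		# retry until sshd inside the container accepts a handshake
		until ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no \
		      -i /home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa \
		      -p 33133 docker@127.0.0.1 true 2>/dev/null; do
		    sleep 1
		done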
	I1101 09:44:42.272738  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:42.272774  431145 ubuntu.go:182] provisioning hostname "newest-cni-722387"
	I1101 09:44:42.272835  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:42.293600  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:42.293871  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:42.293887  431145 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-722387 && echo "newest-cni-722387" | sudo tee /etc/hostname
	I1101 09:44:42.452170  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:42.452269  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:42.478775  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:42.479098  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:42.479124  431145 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-722387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-722387/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-722387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:44:42.628092  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: 
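	The script above is idempotent: if /etc/hosts already maps the hostname it does nothing, if a 127.0.1.1 entry exists it is rewritten in place, and otherwise one is appended. Assuming a stock guest image, the resulting entry would read:
	
		127.0.1.1 newest-cni-722387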
	I1101 09:44:42.628131  431145 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:44:42.628188  431145 ubuntu.go:190] setting up certificates
	I1101 09:44:42.628201  431145 provision.go:84] configureAuth start
	I1101 09:44:42.628256  431145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:42.648832  431145 provision.go:143] copyHostCerts
	I1101 09:44:42.648935  431145 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:44:42.648962  431145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:44:42.649053  431145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:44:42.649183  431145 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:44:42.649199  431145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:44:42.649240  431145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:44:42.649329  431145 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:44:42.649341  431145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:44:42.649384  431145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:44:42.649467  431145 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.newest-cni-722387 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-722387]
	I1101 09:44:42.874468  431145 provision.go:177] copyRemoteCerts
	I1101 09:44:42.874532  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:44:42.874571  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:42.896087  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:42.998810  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:44:43.019227  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:44:43.039961  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:44:43.059985  431145 provision.go:87] duration metric: took 431.765832ms to configureAuth
	I1101 09:44:43.060019  431145 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:44:43.060213  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:43.060333  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.083085  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:43.083441  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:43.083477  431145 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:44:43.378405  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:44:43.378430  431145 machine.go:97] duration metric: took 4.278714731s to provisionDockerMachine
	I1101 09:44:43.378444  431145 start.go:293] postStartSetup for "newest-cni-722387" (driver="docker")
	I1101 09:44:43.378455  431145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:44:43.378525  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:44:43.378566  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.398034  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.503829  431145 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:44:43.507566  431145 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:44:43.507595  431145 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:44:43.507608  431145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:44:43.507674  431145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:44:43.507790  431145 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:44:43.507906  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:44:43.516583  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:43.537594  431145 start.go:296] duration metric: took 159.127541ms for postStartSetup
	I1101 09:44:43.537672  431145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:44:43.537715  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.557603  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.658011  431145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:44:43.662900  431145 fix.go:56] duration metric: took 4.930817988s for fixHost
	I1101 09:44:43.662938  431145 start.go:83] releasing machines lock for "newest-cni-722387", held for 4.930881151s
	I1101 09:44:43.663016  431145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:43.682297  431145 ssh_runner.go:195] Run: cat /version.json
	I1101 09:44:43.682325  431145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:44:43.682357  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.682389  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.706136  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.706904  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.805347  431145 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:43.874184  431145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:44:43.911729  431145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:44:43.916853  431145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:44:43.916969  431145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:44:43.925865  431145 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:44:43.925896  431145 start.go:496] detecting cgroup driver to use...
	I1101 09:44:43.925945  431145 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:44:43.925990  431145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:44:43.943991  431145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:44:43.957957  431145 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:44:43.958025  431145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:44:43.978662  431145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:44:43.998150  431145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:44:44.095766  431145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:44:44.186660  431145 docker.go:234] disabling docker service ...
	I1101 09:44:44.186734  431145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:44:44.203509  431145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:44:44.219637  431145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:44:44.305973  431145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:44:44.397941  431145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:44:44.412445  431145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:44:44.428398  431145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:44:44.428453  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.439202  431145 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:44:44.439274  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.449692  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.459447  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.469010  431145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:44:44.478774  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.488588  431145 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.497828  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
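	Taken together, the sed edits above set the pause image and cgroup manager, pin conmon to the pod cgroup, and allow unprivileged binds to low ports. A sketch of the relevant keys in /etc/crio/crio.conf.d/02-crio.conf after the edits (section headers follow CRI-O's documented TOML layout; the drop-in in this image carries other settings not visible in this log):
	
		[crio.image]
		pause_image = "registry.k8s.io/pause:3.10.1"
		
		[crio.runtime]
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]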
	I1101 09:44:44.508106  431145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:44:44.516197  431145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:44:44.524098  431145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:44.611350  431145 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1101 09:44:44.735128  431145 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:44:44.735190  431145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:44:44.739252  431145 start.go:564] Will wait 60s for crictl version
	I1101 09:44:44.739322  431145 ssh_runner.go:195] Run: which crictl
	I1101 09:44:44.743116  431145 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:44:44.767513  431145 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:44:44.767608  431145 ssh_runner.go:195] Run: crio --version
	I1101 09:44:44.796054  431145 ssh_runner.go:195] Run: crio --version
	I1101 09:44:44.827556  431145 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:44:44.829152  431145 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:44.847295  431145 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:44:44.851625  431145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
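	The one-liner rewrites /etc/hosts via a temp file: it drops any stale host.minikube.internal line, appends the current gateway mapping, and copies the result back, so the file ends up containing:
	
		192.168.103.1	host.minikube.internal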
	I1101 09:44:44.864005  431145 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:44:44.865476  431145 kubeadm.go:884] updating cluster {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:44:44.865641  431145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:44.865713  431145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:44.899288  431145 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:44.899312  431145 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:44:44.899364  431145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:44.925419  431145 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:44.925444  431145 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:44:44.925455  431145 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 09:44:44.925557  431145 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-722387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:44:44.925669  431145 ssh_runner.go:195] Run: crio config
	I1101 09:44:44.976014  431145 cni.go:84] Creating CNI manager for ""
	I1101 09:44:44.976036  431145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:44.976055  431145 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:44:44.976077  431145 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-722387 NodeName:newest-cni-722387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:44:44.976198  431145 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-722387"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
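	The rendered manifest above is written to /var/tmp/minikube/kubeadm.yaml.new below and only replaces the live kubeadm.yaml when the later diff reports a change. For an offline sanity check of a config like this one, recent kubeadm releases ship a validator (assuming a kubeadm new enough to carry the subcommand is on PATH):
	
		kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new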
	I1101 09:44:44.976263  431145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:44:44.984901  431145 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:44:44.985010  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:44:44.993322  431145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:44:45.007451  431145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:44:45.021715  431145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 09:44:45.034982  431145 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:44:45.038773  431145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:45.049453  431145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:45.129091  431145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:45.159551  431145 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387 for IP: 192.168.103.2
	I1101 09:44:45.159578  431145 certs.go:195] generating shared ca certs ...
	I1101 09:44:45.159602  431145 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.159751  431145 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:44:45.159791  431145 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:44:45.159800  431145 certs.go:257] generating profile certs ...
	I1101 09:44:45.159878  431145 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key
	I1101 09:44:45.159960  431145 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae
	I1101 09:44:45.159995  431145 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key
	I1101 09:44:45.160089  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:44:45.160116  431145 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:44:45.160126  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:44:45.160146  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:44:45.160169  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:44:45.160191  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:44:45.160228  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:45.160785  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:44:45.181812  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:44:45.202900  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:44:45.222305  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:44:45.246674  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:44:45.267436  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:44:45.285685  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:44:45.303459  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:44:45.321214  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:44:45.339617  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:44:45.358270  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:44:45.381211  431145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:44:45.397793  431145 ssh_runner.go:195] Run: openssl version
	I1101 09:44:45.405043  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:44:45.414889  431145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:45.419196  431145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:45.419258  431145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:45.462199  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:44:45.470808  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:44:45.480691  431145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:44:45.485187  431145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:44:45.485249  431145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:44:45.520415  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:44:45.529642  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:44:45.539052  431145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:44:45.543154  431145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:44:45.543216  431145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:44:45.579183  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
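	The `openssl x509 -hash` runs explain the symlink names used above: OpenSSL locates trusted CAs by the truncated subject-name hash, so a certificate whose hash prints as b5213941 must be reachable as /etc/ssl/certs/b5213941.0 for verification to find it. The lookup can be reproduced by hand:
	
		# prints b5213941, the basename of the .0 symlink created above
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem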
	I1101 09:44:45.588088  431145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:44:45.592274  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:44:45.628969  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:44:45.669055  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:44:45.715399  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:44:45.765976  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:44:45.825051  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
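	Each `-checkend 86400` run above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours): the command exits 0 if the cert remains valid past that window and non-zero otherwise, which is how minikube decides here that no control-plane certificates need regeneration. For example:
	
		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
		  && echo "valid for at least 24h" || echo "expiring soon: regenerate"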
	I1101 09:44:45.873898  431145 kubeadm.go:401] StartCluster: {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:45.874050  431145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:44:45.874116  431145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:44:45.906987  431145 cri.go:89] found id: "8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00"
	I1101 09:44:45.907017  431145 cri.go:89] found id: "0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7"
	I1101 09:44:45.907024  431145 cri.go:89] found id: "5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331"
	I1101 09:44:45.907029  431145 cri.go:89] found id: "d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df"
	I1101 09:44:45.907032  431145 cri.go:89] found id: ""
	I1101 09:44:45.907082  431145 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:44:45.920243  431145 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:45Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:45.920312  431145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:44:45.928805  431145 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:44:45.928828  431145 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:44:45.928887  431145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:44:45.937804  431145 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:44:45.938517  431145 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-722387" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:45.938842  431145 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-722387" cluster setting kubeconfig missing "newest-cni-722387" context setting]
	I1101 09:44:45.939439  431145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.941210  431145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:44:45.951281  431145 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 09:44:45.951320  431145 kubeadm.go:602] duration metric: took 22.485962ms to restartPrimaryControlPlane
	I1101 09:44:45.951331  431145 kubeadm.go:403] duration metric: took 77.447685ms to StartCluster
	I1101 09:44:45.951352  431145 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.951427  431145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:45.952604  431145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.952892  431145 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:45.952992  431145 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:44:45.953092  431145 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-722387"
	I1101 09:44:45.953112  431145 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-722387"
	I1101 09:44:45.953110  431145 addons.go:70] Setting dashboard=true in profile "newest-cni-722387"
	I1101 09:44:45.953124  431145 addons.go:70] Setting default-storageclass=true in profile "newest-cni-722387"
	I1101 09:44:45.953137  431145 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-722387"
	I1101 09:44:45.953138  431145 addons.go:239] Setting addon dashboard=true in "newest-cni-722387"
	W1101 09:44:45.953149  431145 addons.go:248] addon dashboard should already be in state true
	I1101 09:44:45.953154  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:45.953181  431145 host.go:66] Checking if "newest-cni-722387" exists ...
	W1101 09:44:45.953119  431145 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:44:45.953228  431145 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:45.953478  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.953621  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.953671  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.955857  431145 out.go:179] * Verifying Kubernetes components...
	I1101 09:44:45.957450  431145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:45.980922  431145 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:44:45.981057  431145 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:44:45.981080  431145 addons.go:239] Setting addon default-storageclass=true in "newest-cni-722387"
	W1101 09:44:45.981098  431145 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:44:45.981127  431145 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:45.981614  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.982219  431145 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:45.982236  431145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:44:45.982290  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:45.983874  431145 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:44:45.985175  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:44:45.985192  431145 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:44:45.985253  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:46.012192  431145 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:46.012220  431145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:44:46.012305  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:46.019255  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:46.024050  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:46.039888  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:46.130982  431145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:46.149012  431145 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:44:46.149216  431145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:44:46.152603  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:44:46.152632  431145 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:44:46.162406  431145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:46.170141  431145 api_server.go:72] duration metric: took 217.196033ms to wait for apiserver process to appear ...
	I1101 09:44:46.170173  431145 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:44:46.170197  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:46.174261  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:44:46.174290  431145 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:44:46.175301  431145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:46.194605  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:44:46.194670  431145 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:44:46.218647  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:44:46.218681  431145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:44:46.237252  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:44:46.237280  431145 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:44:46.254336  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:44:46.254363  431145 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:44:46.274725  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:44:46.274752  431145 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:44:46.290997  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:44:46.291026  431145 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:44:46.308649  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:44:46.308678  431145 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:44:46.327829  431145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:44:47.697141  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:44:47.697190  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:44:47.697212  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:47.702357  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1101 09:44:47.702388  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1101 09:44:48.171067  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:48.175139  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:44:48.175170  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:44:48.238952  431145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076486284s)
	I1101 09:44:48.238973  431145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.063640647s)
	I1101 09:44:48.239110  431145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.911246598s)
	I1101 09:44:48.241342  431145 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-722387 addons enable metrics-server
	
	I1101 09:44:48.250599  431145 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:44:48.252084  431145 addons.go:515] duration metric: took 2.299090341s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:44:48.670953  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:48.675373  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:44:48.675404  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:44:49.171049  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:49.175504  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1101 09:44:49.176558  431145 api_server.go:141] control plane version: v1.34.1
	I1101 09:44:49.176588  431145 api_server.go:131] duration metric: took 3.006407657s to wait for apiserver health ...
	I1101 09:44:49.176603  431145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:44:49.180291  431145 system_pods.go:59] 8 kube-system pods found
	I1101 09:44:49.180323  431145 system_pods.go:61] "coredns-66bc5c9577-sbh67" [855a1e98-2e65-46b2-b887-ecc758fa3162] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:49.180331  431145 system_pods.go:61] "etcd-newest-cni-722387" [db6d9615-3fd5-4642-abb7-9c060c90d98e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:44:49.180339  431145 system_pods.go:61] "kindnet-vq8r5" [0e3ba1a9-d43e-4944-bd85-a7858465eeb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:44:49.180345  431145 system_pods.go:61] "kube-apiserver-newest-cni-722387" [8e6d728a-c7de-4b60-8627-f4e2729f14b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:44:49.180351  431145 system_pods.go:61] "kube-controller-manager-newest-cni-722387" [a0094ce2-c3fe-4f6f-9f2b-7d9871577296] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:44:49.180356  431145 system_pods.go:61] "kube-proxy-rxnwv" [b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:44:49.180362  431145 system_pods.go:61] "kube-scheduler-newest-cni-722387" [8c1c8755-a1ca-4aa2-894c-b7ae1e5f1ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:44:49.180367  431145 system_pods.go:61] "storage-provisioner" [cca90c7a-0f05-4855-ba4d-530a67715840] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:49.180378  431145 system_pods.go:74] duration metric: took 3.764919ms to wait for pod list to return data ...
	I1101 09:44:49.180389  431145 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:44:49.182953  431145 default_sa.go:45] found service account: "default"
	I1101 09:44:49.182973  431145 default_sa.go:55] duration metric: took 2.578627ms for default service account to be created ...
	I1101 09:44:49.182987  431145 kubeadm.go:587] duration metric: took 3.230047702s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:49.183001  431145 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:44:49.185699  431145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:44:49.185726  431145 node_conditions.go:123] node cpu capacity is 8
	I1101 09:44:49.185738  431145 node_conditions.go:105] duration metric: took 2.732658ms to run NodePressure ...
	I1101 09:44:49.185750  431145 start.go:242] waiting for startup goroutines ...
	I1101 09:44:49.185760  431145 start.go:247] waiting for cluster config update ...
	I1101 09:44:49.185774  431145 start.go:256] writing updated cluster config ...
	I1101 09:44:49.186121  431145 ssh_runner.go:195] Run: rm -f paused
	I1101 09:44:49.237563  431145 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:49.239538  431145 out.go:179] * Done! kubectl is now configured to use "newest-cni-722387" cluster and "default" namespace by default
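	Editor's note: the healthz exchange above is the normal kube-apiserver startup sequence rather than a fault: 403 while the RBAC bootstrap roles (including system:public-info-viewer, which grants anonymous access to /healthz) are still being created, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200 once every check passes. A minimal Go sketch of such a polling loop, with hypothetical names; minikube's real implementation lives in api_server.go and differs in detail:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// 200 or the deadline expires. TLS verification is skipped because the
	// probe runs as an anonymous client, which is why the 403/500/200
	// sequence in the log above appears at all.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
			Timeout: 2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned "ok"
				}
				// 403: anonymous access blocked until RBAC bootstrap roles exist.
				// 500: one or more post-start hooks still report failure.
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.103.2:8443/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

	Treating 403 and 500 as retryable is what lets the wait at 09:44:46-09:44:49 succeed in about three seconds.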
	
	
	==> CRI-O <==
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.538642921Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-rxnwv/POD" id=48768ec7-e0f1-4ef1-bc52-9516198d0cc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.538744761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.539831879Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.540693533Z" level=info msg="Ran pod sandbox d21c985e76aa6d326ff83ace7f18bf88eea00a0d0b3d8e1600ad69bf0bc63f6b with infra container: kube-system/kindnet-vq8r5/POD" id=b0aa7878-4e25-413b-8a4a-da3819437144 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.54200373Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0475fdeb-1300-4b95-b844-4f1c7902f716 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.542522727Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=48768ec7-e0f1-4ef1-bc52-9516198d0cc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.54294348Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=931cd5d5-ece0-4f0b-96ce-30ca02520d92 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544142909Z" level=info msg="Creating container: kube-system/kindnet-vq8r5/kindnet-cni" id=eb5f4184-fa3a-4e03-af22-822ba44527d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544229469Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544241685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544861438Z" level=info msg="Ran pod sandbox c0a8949892c04a9ea729b4726e2b89418ae56dcfa0ac75f64ee600147b5ab0b2 with infra container: kube-system/kube-proxy-rxnwv/POD" id=48768ec7-e0f1-4ef1-bc52-9516198d0cc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.545836341Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6f29e1bb-2fe9-4a7f-9145-3917116456b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.547863592Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd637943-2c5c-4a44-a357-ec0ba5cb8708 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.548551717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.549008124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.549375666Z" level=info msg="Creating container: kube-system/kube-proxy-rxnwv/kube-proxy" id=24b34801-7ebc-4b69-980c-986d1234bc6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.549477279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.55387464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.554564104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.578675307Z" level=info msg="Created container 6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4: kube-system/kindnet-vq8r5/kindnet-cni" id=eb5f4184-fa3a-4e03-af22-822ba44527d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.57940374Z" level=info msg="Starting container: 6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4" id=7a70662f-7799-454a-9816-ef74e22482f3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.581134892Z" level=info msg="Created container f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd: kube-system/kube-proxy-rxnwv/kube-proxy" id=24b34801-7ebc-4b69-980c-986d1234bc6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.581400843Z" level=info msg="Started container" PID=1036 containerID=6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4 description=kube-system/kindnet-vq8r5/kindnet-cni id=7a70662f-7799-454a-9816-ef74e22482f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d21c985e76aa6d326ff83ace7f18bf88eea00a0d0b3d8e1600ad69bf0bc63f6b
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.581861239Z" level=info msg="Starting container: f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd" id=74ad86a2-9d2d-490d-9e1b-d04d5f920c33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.584625915Z" level=info msg="Started container" PID=1037 containerID=f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd description=kube-system/kube-proxy-rxnwv/kube-proxy id=74ad86a2-9d2d-490d-9e1b-d04d5f920c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0a8949892c04a9ea729b4726e2b89418ae56dcfa0ac75f64ee600147b5ab0b2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f38942d79414d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   c0a8949892c04       kube-proxy-rxnwv                            kube-system
	6f51ecac5c19a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   d21c985e76aa6       kindnet-vq8r5                               kube-system
	8c9de05b45c27       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   3120283fbceaf       kube-controller-manager-newest-cni-722387   kube-system
	0c3e2ddaf2952       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   d78e9e4f2bdd6       kube-scheduler-newest-cni-722387            kube-system
	5e73866046dcd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   c0711e4e294f5       kube-apiserver-newest-cni-722387            kube-system
	d99ec39de9349       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   c631c357aff3f       etcd-newest-cni-722387                      kube-system
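	
	Editor's note: every container above reports ATTEMPT 1, i.e. all of them were recreated a few seconds before log collection, consistent with the unpause step restarting the workload that the preceding pause froze. The table matches the output of the CRI debugging CLI; a small Go sketch that shells out to it (assumes crictl is installed on the node and the caller may sudo):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listContainers shells out to crictl, the standard CRI debugging CLI,
	// to dump all containers known to CRI-O, matching the table above.
	func listContainers() (string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := listContainers()
		if err != nil {
			fmt.Println("crictl failed:", err)
		}
		fmt.Print(out)
	}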
	
	
	==> describe nodes <==
	Name:               newest-cni-722387
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-722387
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=newest-cni-722387
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_44_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:44:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-722387
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-722387
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                ae9053f9-594c-4df9-adeb-a6fd802f163d
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-722387                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-vq8r5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-722387             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-newest-cni-722387    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-rxnwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-722387             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 3s    kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node newest-cni-722387 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node newest-cni-722387 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node newest-cni-722387 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node newest-cni-722387 event: Registered Node newest-cni-722387 in Controller
	  Normal  RegisteredNode           1s    node-controller  Node newest-cni-722387 event: Registered Node newest-cni-722387 in Controller
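	
	Editor's note: the node.kubernetes.io/not-ready:NoSchedule taint and the Ready=False condition above explain the two Pending pods in the earlier pod list: coredns and storage-provisioner carry no toleration for that taint, so the scheduler holds them until the CNI reports ready. A minimal client-go sketch for inspecting exactly these fields; the kubeconfig path and node name are assumptions for illustration:

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "newest-cni-722387", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// The taint below is what keeps untolerated pods Pending.
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s=%s:%s\n", t.Key, t.Value, t.Effect)
		}
		// Ready flips to True once the CNI config appears in /etc/cni/net.d/.
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("Ready=%s (%s)\n", c.Status, c.Message)
			}
		}
	}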
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
	
	
	==> etcd [d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df] <==
	{"level":"warn","ts":"2025-11-01T09:44:47.000500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.010141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.017373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.026058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.033055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.039995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.046811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.053438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.060965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.082847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.091748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.099607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.107295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.115877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.124057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.135110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.148269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.156101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.164232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.172618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.180557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.197354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.204495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.212419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.261640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:44:52 up  1:27,  0 user,  load average: 7.56, 5.85, 3.52
	Linux newest-cni-722387 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4] <==
	I1101 09:44:48.832149       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:44:48.832456       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:44:48.832610       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:44:48.832629       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:44:48.832650       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:44:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:44:49.035223       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:44:49.035253       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:44:49.035265       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:44:49.035473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:44:49.636245       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:44:49.636280       1 metrics.go:72] Registering metrics
	I1101 09:44:49.636350       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331] <==
	I1101 09:44:47.778246       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:44:47.778227       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:44:47.778366       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:44:47.779133       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:44:47.779161       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:44:47.779191       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:44:47.778971       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:44:47.779574       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:44:47.785833       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:44:47.797016       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:44:47.803323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:44:47.803339       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:44:48.044496       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:44:48.079601       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:44:48.102322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:44:48.110662       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:44:48.117566       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:44:48.153374       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.29.167"}
	I1101 09:44:48.164038       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.103.174"}
	I1101 09:44:48.682220       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:44:51.487968       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:44:51.488013       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:44:51.538334       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:44:51.639325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:44:51.639325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00] <==
	I1101 09:44:51.095600       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:44:51.097746       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:44:51.109693       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:44:51.115103       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:44:51.117429       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:44:51.119642       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:44:51.124378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:44:51.126989       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:44:51.134501       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:44:51.134532       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:44:51.134549       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:44:51.134584       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:44:51.134671       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:44:51.134687       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:44:51.134767       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:44:51.134706       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:44:51.134794       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:44:51.136604       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:44:51.138704       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:44:51.139776       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:44:51.142030       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:44:51.144268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:44:51.148628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:44:51.151087       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:44:51.153667       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd] <==
	I1101 09:44:48.620482       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:44:48.702389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:44:48.803385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:44:48.803432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 09:44:48.803542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:44:48.826738       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:44:48.826794       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:44:48.833630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:44:48.834096       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:44:48.834141       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:44:48.835727       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:44:48.835757       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:44:48.835762       1 config.go:200] "Starting service config controller"
	I1101 09:44:48.835780       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:44:48.835784       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:44:48.835790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:44:48.835804       1 config.go:309] "Starting node config controller"
	I1101 09:44:48.835809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:44:48.936700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:44:48.936716       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:44:48.936749       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:44:48.936770       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7] <==
	I1101 09:44:46.119293       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:44:47.883866       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:44:47.883936       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:44:47.889201       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:44:47.889235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:44:47.889248       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:44:47.889261       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:44:47.889301       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:44:47.889313       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:44:47.889468       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:44:47.889788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:44:47.989548       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:44:47.989571       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:44:47.989643       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.734969     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.808149     665 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.808265     665 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.808312     665 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.809655     665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.848712     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-722387\" already exists" pod="kube-system/kube-controller-manager-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.848757     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.856985     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-722387\" already exists" pod="kube-system/kube-scheduler-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.857032     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.864294     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-722387\" already exists" pod="kube-system/etcd-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.864336     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.874250     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-722387\" already exists" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.228314     665 apiserver.go:52] "Watching apiserver"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.234831     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.277274     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: E1101 09:44:48.283702     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-722387\" already exists" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.295938     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-xtables-lock\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.295976     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-lib-modules\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.296022     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-lib-modules\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.296090     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-cni-cfg\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.296132     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-xtables-lock\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:50 newest-cni-722387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:50 newest-cni-722387 kubelet[665]: I1101 09:44:50.269215     665 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:44:50 newest-cni-722387 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:50 newest-cni-722387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
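The journal excerpt above ends with systemd stopping the kubelet during the pause attempt. To pull a fresh slice of the kubelet journal from the node itself, one option (a sketch; assumes the profile container is still running and reachable over SSH) is:

	out/minikube-linux-amd64 -p newest-cni-722387 ssh "sudo journalctl -u kubelet --no-pager -n 100"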
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-722387 -n newest-cni-722387
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-722387 -n newest-cni-722387: exit status 2 (344.172405ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-722387 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl: exit status 1 (63.563733ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sbh67" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-p2zcd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gnmwl" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl: exit status 1
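The NotFound errors are expected: the pod list above was gathered across all namespaces (-A), but the describe call names no namespace, so kubectl searches only "default". Namespace-qualified variants would locate the pods, assuming the usual homes for these addons (kube-system for coredns and storage-provisioner, kubernetes-dashboard for the dashboard pods):

	kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner -n kube-system
	kubectl --context newest-cni-722387 describe pod dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl -n kubernetes-dashboard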
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-722387
helpers_test.go:243: (dbg) docker inspect newest-cni-722387:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6",
	        "Created": "2025-11-01T09:44:06.484487044Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 431364,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-01T09:44:38.789922469Z",
	            "FinishedAt": "2025-11-01T09:44:37.823584839Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/hostname",
	        "HostsPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/hosts",
	        "LogPath": "/var/lib/docker/containers/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6/5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6-json.log",
	        "Name": "/newest-cni-722387",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-722387:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-722387",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5cc4aeec7217ac8c213ff745fc12df3a271c9ca2718fe96ff6f8a1735026f1c6",
	                "LowerDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925-init/diff:/var/lib/docker/overlay2/3f68f4ee1c96313ff75c7c36c9b17862bf5776a73269f76fe3c4d01908f433ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5f26a2f83a3104e9238455376f7c71a6bba5468b15774938cc086f45a49bb925/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-722387",
	                "Source": "/var/lib/docker/volumes/newest-cni-722387/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-722387",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-722387",
	                "name.minikube.sigs.k8s.io": "newest-cni-722387",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d933cfafd96a4352eaa7d16ce718e1359577f09f0259075121bacf2ff02e9f07",
	            "SandboxKey": "/var/run/docker/netns/d933cfafd96a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-722387": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4a:bd:6a:8c:e9:a7",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "097f5920ceba29035e623d66cf12db8333915593551ced6800060e5546bfb0e0",
	                    "EndpointID": "f32d8ad7b4cfdc6f76b68866410cec114d3790581b9269c5686d68479e7a1ea2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-722387",
	                        "5cc4aeec7217"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
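The inspect dump above is the raw JSON the harness works from; individual fields can be pulled with the same Go-template mechanism the minikube log below uses. For example (mirroring templates visible in this run):

	docker container inspect newest-cni-722387 --format '{{.State.Status}}'
	docker container inspect newest-cni-722387 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'

The first prints the container state ("running" here despite the pause attempt); the second resolves the host port mapped to the guest's SSH port 22 (33133 in this run).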
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387: exit status 2 (336.416551ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-722387 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ start   │ -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ old-k8s-version-106430 image list --format=json                                                                                                                                                                                               │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:43 UTC │
	│ pause   │ -p old-k8s-version-106430 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:43 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ no-preload-224845 image list --format=json                                                                                                                                                                                                    │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p no-preload-224845 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p old-k8s-version-106430                                                                                                                                                                                                                     │ old-k8s-version-106430       │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p no-preload-224845                                                                                                                                                                                                                          │ no-preload-224845            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ addons  │ enable metrics-server -p newest-cni-722387 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ stop    │ -p newest-cni-722387 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ embed-certs-214580 image list --format=json                                                                                                                                                                                                   │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p embed-certs-214580 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ default-k8s-diff-port-927869 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p default-k8s-diff-port-927869 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-722387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ start   │ -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p embed-certs-214580                                                                                                                                                                                                                         │ embed-certs-214580           │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p default-k8s-diff-port-927869                                                                                                                                                                                                               │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ delete  │ -p default-k8s-diff-port-927869                                                                                                                                                                                                               │ default-k8s-diff-port-927869 │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ image   │ newest-cni-722387 image list --format=json                                                                                                                                                                                                    │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │ 01 Nov 25 09:44 UTC │
	│ pause   │ -p newest-cni-722387 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-722387            │ jenkins │ v1.37.0 │ 01 Nov 25 09:44 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 09:44:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 09:44:38.523287  431145 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:44:38.523564  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523574  431145 out.go:374] Setting ErrFile to fd 2...
	I1101 09:44:38.523578  431145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:44:38.523800  431145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:44:38.524267  431145 out.go:368] Setting JSON to false
	I1101 09:44:38.525629  431145 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":5217,"bootTime":1761985062,"procs":437,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:44:38.525727  431145 start.go:143] virtualization: kvm guest
	I1101 09:44:38.527859  431145 out.go:179] * [newest-cni-722387] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:44:38.529045  431145 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:44:38.529114  431145 notify.go:221] Checking for updates...
	I1101 09:44:38.531328  431145 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:44:38.533047  431145 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:38.534417  431145 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:44:38.535653  431145 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:44:38.537039  431145 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:44:38.538738  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:38.539220  431145 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:44:38.565195  431145 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:44:38.565381  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.631657  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.621287477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.631767  431145 docker.go:319] overlay module found
	I1101 09:44:38.633719  431145 out.go:179] * Using the docker driver based on existing profile
	I1101 09:44:38.635183  431145 start.go:309] selected driver: docker
	I1101 09:44:38.635201  431145 start.go:930] validating driver "docker" against &{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.635281  431145 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:44:38.635786  431145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:44:38.703758  431145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:44:38.691419349 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:44:38.704058  431145 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:38.704091  431145 cni.go:84] Creating CNI manager for ""
	I1101 09:44:38.704133  431145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:38.704164  431145 start.go:353] cluster config:
	{Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:38.706149  431145 out.go:179] * Starting "newest-cni-722387" primary control-plane node in "newest-cni-722387" cluster
	I1101 09:44:38.707131  431145 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 09:44:38.708418  431145 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1101 09:44:38.709522  431145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:38.709565  431145 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 09:44:38.709574  431145 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 09:44:38.709686  431145 cache.go:59] Caching tarball of preloaded images
	I1101 09:44:38.709764  431145 preload.go:233] Found /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1101 09:44:38.709778  431145 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1101 09:44:38.709898  431145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:38.731867  431145 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1101 09:44:38.731884  431145 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1101 09:44:38.731903  431145 cache.go:233] Successfully downloaded all kic artifacts
	I1101 09:44:38.731970  431145 start.go:360] acquireMachinesLock for newest-cni-722387: {Name:mk940a2cf467ead4a4947b13278d9e50da243cc6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1101 09:44:38.732043  431145 start.go:364] duration metric: took 47.245µs to acquireMachinesLock for "newest-cni-722387"
	I1101 09:44:38.732065  431145 start.go:96] Skipping create...Using existing machine configuration
	I1101 09:44:38.732073  431145 fix.go:54] fixHost starting: 
	I1101 09:44:38.732264  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:38.758201  431145 fix.go:112] recreateIfNeeded on newest-cni-722387: state=Stopped err=<nil>
	W1101 09:44:38.758255  431145 fix.go:138] unexpected machine state, will restart: <nil>
	I1101 09:44:38.760106  431145 out.go:252] * Restarting existing docker container for "newest-cni-722387" ...
	I1101 09:44:38.760187  431145 cli_runner.go:164] Run: docker start newest-cni-722387
	I1101 09:44:39.054166  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:39.076615  431145 kic.go:430] container "newest-cni-722387" state is running.
	I1101 09:44:39.077052  431145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:39.099415  431145 profile.go:143] Saving config to /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/config.json ...
	I1101 09:44:39.099699  431145 machine.go:94] provisionDockerMachine start ...
	I1101 09:44:39.099782  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:39.122460  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:39.122803  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:39.122824  431145 main.go:143] libmachine: About to run SSH command:
	hostname
	I1101 09:44:39.123514  431145 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56538->127.0.0.1:33133: read: connection reset by peer
	I1101 09:44:42.272738  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:42.272774  431145 ubuntu.go:182] provisioning hostname "newest-cni-722387"
	I1101 09:44:42.272835  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:42.293600  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:42.293871  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:42.293887  431145 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-722387 && echo "newest-cni-722387" | sudo tee /etc/hostname
	I1101 09:44:42.452170  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-722387
	
	I1101 09:44:42.452269  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:42.478775  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:42.479098  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:42.479124  431145 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-722387' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-722387/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-722387' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1101 09:44:42.628092  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1101 09:44:42.628131  431145 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21833-104443/.minikube CaCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21833-104443/.minikube}
	I1101 09:44:42.628188  431145 ubuntu.go:190] setting up certificates
	I1101 09:44:42.628201  431145 provision.go:84] configureAuth start
	I1101 09:44:42.628256  431145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:42.648832  431145 provision.go:143] copyHostCerts
	I1101 09:44:42.648935  431145 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem, removing ...
	I1101 09:44:42.648962  431145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem
	I1101 09:44:42.649053  431145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/ca.pem (1082 bytes)
	I1101 09:44:42.649183  431145 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem, removing ...
	I1101 09:44:42.649199  431145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem
	I1101 09:44:42.649240  431145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/cert.pem (1123 bytes)
	I1101 09:44:42.649329  431145 exec_runner.go:144] found /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem, removing ...
	I1101 09:44:42.649341  431145 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem
	I1101 09:44:42.649384  431145 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21833-104443/.minikube/key.pem (1679 bytes)
	I1101 09:44:42.649467  431145 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem org=jenkins.newest-cni-722387 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-722387]
	I1101 09:44:42.874468  431145 provision.go:177] copyRemoteCerts
	I1101 09:44:42.874532  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1101 09:44:42.874571  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:42.896087  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:42.998810  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1101 09:44:43.019227  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1101 09:44:43.039961  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1101 09:44:43.059985  431145 provision.go:87] duration metric: took 431.765832ms to configureAuth
	I1101 09:44:43.060019  431145 ubuntu.go:206] setting minikube options for container-runtime
	I1101 09:44:43.060213  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:43.060333  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.083085  431145 main.go:143] libmachine: Using SSH client type: native
	I1101 09:44:43.083441  431145 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1101 09:44:43.083477  431145 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1101 09:44:43.378405  431145 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1101 09:44:43.378430  431145 machine.go:97] duration metric: took 4.278714731s to provisionDockerMachine
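	# The crio restart above applies the drop-in just written; to verify it in-guest
	# (a sketch, assuming SSH access to this profile):
	#   out/minikube-linux-amd64 -p newest-cni-722387 ssh "cat /etc/sysconfig/crio.minikube"
	#   # expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '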
	I1101 09:44:43.378444  431145 start.go:293] postStartSetup for "newest-cni-722387" (driver="docker")
	I1101 09:44:43.378455  431145 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1101 09:44:43.378525  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1101 09:44:43.378566  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.398034  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.503829  431145 ssh_runner.go:195] Run: cat /etc/os-release
	I1101 09:44:43.507566  431145 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1101 09:44:43.507595  431145 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1101 09:44:43.507608  431145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/addons for local assets ...
	I1101 09:44:43.507674  431145 filesync.go:126] Scanning /home/jenkins/minikube-integration/21833-104443/.minikube/files for local assets ...
	I1101 09:44:43.507790  431145 filesync.go:149] local asset: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem -> 1079552.pem in /etc/ssl/certs
	I1101 09:44:43.507906  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1101 09:44:43.516583  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:43.537594  431145 start.go:296] duration metric: took 159.127541ms for postStartSetup
	I1101 09:44:43.537672  431145 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:44:43.537715  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.557603  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.658011  431145 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1101 09:44:43.662900  431145 fix.go:56] duration metric: took 4.930817988s for fixHost
	I1101 09:44:43.662938  431145 start.go:83] releasing machines lock for "newest-cni-722387", held for 4.930881151s
	I1101 09:44:43.663016  431145 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-722387
	I1101 09:44:43.682297  431145 ssh_runner.go:195] Run: cat /version.json
	I1101 09:44:43.682325  431145 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1101 09:44:43.682357  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.682389  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:43.706136  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.706904  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:43.805347  431145 ssh_runner.go:195] Run: systemctl --version
	I1101 09:44:43.874184  431145 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1101 09:44:43.911729  431145 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1101 09:44:43.916853  431145 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1101 09:44:43.916969  431145 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1101 09:44:43.925865  431145 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1101 09:44:43.925896  431145 start.go:496] detecting cgroup driver to use...
	I1101 09:44:43.925945  431145 detect.go:190] detected "systemd" cgroup driver on host os
	I1101 09:44:43.925990  431145 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1101 09:44:43.943991  431145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1101 09:44:43.957957  431145 docker.go:218] disabling cri-docker service (if available) ...
	I1101 09:44:43.958025  431145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1101 09:44:43.978662  431145 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1101 09:44:43.998150  431145 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1101 09:44:44.095766  431145 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1101 09:44:44.186660  431145 docker.go:234] disabling docker service ...
	I1101 09:44:44.186734  431145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1101 09:44:44.203509  431145 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1101 09:44:44.219637  431145 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1101 09:44:44.305973  431145 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1101 09:44:44.397941  431145 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1101 09:44:44.412445  431145 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1101 09:44:44.428398  431145 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1101 09:44:44.428453  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.439202  431145 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1101 09:44:44.439274  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.449692  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.459447  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.469010  431145 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1101 09:44:44.478774  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.488588  431145 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.497828  431145 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1101 09:44:44.508106  431145 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1101 09:44:44.516197  431145 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1101 09:44:44.524098  431145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:44.611350  431145 ssh_runner.go:195] Run: sudo systemctl restart crio
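
Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave the CRI-O drop-in with roughly the following keys before the restart. This is a sketch reconstructed from those commands, not a dump of the actual file:

    # /etc/crio/crio.conf.d/02-crio.conf (sketch of the edited keys)
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
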
	I1101 09:44:44.735128  431145 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1101 09:44:44.735190  431145 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1101 09:44:44.739252  431145 start.go:564] Will wait 60s for crictl version
	I1101 09:44:44.739322  431145 ssh_runner.go:195] Run: which crictl
	I1101 09:44:44.743116  431145 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1101 09:44:44.767513  431145 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1101 09:44:44.767608  431145 ssh_runner.go:195] Run: crio --version
	I1101 09:44:44.796054  431145 ssh_runner.go:195] Run: crio --version
	I1101 09:44:44.827556  431145 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1101 09:44:44.829152  431145 cli_runner.go:164] Run: docker network inspect newest-cni-722387 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1101 09:44:44.847295  431145 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1101 09:44:44.851625  431145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
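
The hosts-file command above is an idempotent replace-then-append. Unpacked and annotated (bash, equivalent to the one-liner in the log):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any stale entry
      printf '192.168.103.1\thost.minikube.internal\n'   # append the current mapping
    } > /tmp/h.$$                                        # stage the new file under a unique name
    sudo cp /tmp/h.$$ /etc/hosts                         # install it over the original
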
	I1101 09:44:44.864005  431145 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1101 09:44:44.865476  431145 kubeadm.go:884] updating cluster {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1101 09:44:44.865641  431145 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 09:44:44.865713  431145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:44.899288  431145 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:44.899312  431145 crio.go:433] Images already preloaded, skipping extraction
	I1101 09:44:44.899364  431145 ssh_runner.go:195] Run: sudo crictl images --output json
	I1101 09:44:44.925419  431145 crio.go:514] all images are preloaded for cri-o runtime.
	I1101 09:44:44.925444  431145 cache_images.go:86] Images are preloaded, skipping loading
	I1101 09:44:44.925455  431145 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1101 09:44:44.925557  431145 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-722387 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1101 09:44:44.925669  431145 ssh_runner.go:195] Run: crio config
	I1101 09:44:44.976014  431145 cni.go:84] Creating CNI manager for ""
	I1101 09:44:44.976036  431145 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 09:44:44.976055  431145 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1101 09:44:44.976077  431145 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-722387 NodeName:newest-cni-722387 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1101 09:44:44.976198  431145 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-722387"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
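
This generated config is written to /var/tmp/minikube/kubeadm.yaml.new below. As a hypothetical manual sanity check (not something minikube runs itself), kubeadm can evaluate such a file without changing the node:

    # --dry-run prints what kubeadm would do and touches nothing.
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
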
	
	I1101 09:44:44.976263  431145 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1101 09:44:44.984901  431145 binaries.go:44] Found k8s binaries, skipping transfer
	I1101 09:44:44.985010  431145 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1101 09:44:44.993322  431145 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1101 09:44:45.007451  431145 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1101 09:44:45.021715  431145 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1101 09:44:45.034982  431145 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1101 09:44:45.038773  431145 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1101 09:44:45.049453  431145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:45.129091  431145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:45.159551  431145 certs.go:69] Setting up /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387 for IP: 192.168.103.2
	I1101 09:44:45.159578  431145 certs.go:195] generating shared ca certs ...
	I1101 09:44:45.159602  431145 certs.go:227] acquiring lock for ca certs: {Name:mkf1e1164b4d43139647fe20f4b19639e232990c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.159751  431145 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key
	I1101 09:44:45.159791  431145 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key
	I1101 09:44:45.159800  431145 certs.go:257] generating profile certs ...
	I1101 09:44:45.159878  431145 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/client.key
	I1101 09:44:45.159960  431145 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key.9a1cecae
	I1101 09:44:45.159995  431145 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key
	I1101 09:44:45.160089  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem (1338 bytes)
	W1101 09:44:45.160116  431145 certs.go:480] ignoring /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955_empty.pem, impossibly tiny 0 bytes
	I1101 09:44:45.160126  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca-key.pem (1675 bytes)
	I1101 09:44:45.160146  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/ca.pem (1082 bytes)
	I1101 09:44:45.160169  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/cert.pem (1123 bytes)
	I1101 09:44:45.160191  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/certs/key.pem (1679 bytes)
	I1101 09:44:45.160228  431145 certs.go:484] found cert: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem (1708 bytes)
	I1101 09:44:45.160785  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1101 09:44:45.181812  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1101 09:44:45.202900  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1101 09:44:45.222305  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1101 09:44:45.246674  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1101 09:44:45.267436  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1101 09:44:45.285685  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1101 09:44:45.303459  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/newest-cni-722387/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1101 09:44:45.321214  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1101 09:44:45.339617  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/certs/107955.pem --> /usr/share/ca-certificates/107955.pem (1338 bytes)
	I1101 09:44:45.358270  431145 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/ssl/certs/1079552.pem --> /usr/share/ca-certificates/1079552.pem (1708 bytes)
	I1101 09:44:45.381211  431145 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1101 09:44:45.397793  431145 ssh_runner.go:195] Run: openssl version
	I1101 09:44:45.405043  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1101 09:44:45.414889  431145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:45.419196  431145 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  1 08:55 /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:45.419258  431145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1101 09:44:45.462199  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1101 09:44:45.470808  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107955.pem && ln -fs /usr/share/ca-certificates/107955.pem /etc/ssl/certs/107955.pem"
	I1101 09:44:45.480691  431145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107955.pem
	I1101 09:44:45.485187  431145 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  1 09:02 /usr/share/ca-certificates/107955.pem
	I1101 09:44:45.485249  431145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107955.pem
	I1101 09:44:45.520415  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107955.pem /etc/ssl/certs/51391683.0"
	I1101 09:44:45.529642  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1079552.pem && ln -fs /usr/share/ca-certificates/1079552.pem /etc/ssl/certs/1079552.pem"
	I1101 09:44:45.539052  431145 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1079552.pem
	I1101 09:44:45.543154  431145 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  1 09:02 /usr/share/ca-certificates/1079552.pem
	I1101 09:44:45.543216  431145 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1079552.pem
	I1101 09:44:45.579183  431145 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1079552.pem /etc/ssl/certs/3ec20f2e.0"
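
The b5213941.0 / 51391683.0 / 3ec20f2e.0 link names above are OpenSSL subject hashes, which is how TLS libraries locate CAs under /etc/ssl/certs. A sketch of the derivation for the first one, combining the two commands from the log:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                                  # subject hash; b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"
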
	I1101 09:44:45.588088  431145 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1101 09:44:45.592274  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1101 09:44:45.628969  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1101 09:44:45.669055  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1101 09:44:45.715399  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1101 09:44:45.765976  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1101 09:44:45.825051  431145 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
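
Each -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; the answer is carried in the exit status, not in output. A hypothetical standalone use of the same check:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver cert valid for at least another 24h"
    else
        echo "apiserver cert expires within 24h; would need regeneration"
    fi
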
	I1101 09:44:45.873898  431145 kubeadm.go:401] StartCluster: {Name:newest-cni-722387 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-722387 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:44:45.874050  431145 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1101 09:44:45.874116  431145 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1101 09:44:45.906987  431145 cri.go:89] found id: "8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00"
	I1101 09:44:45.907017  431145 cri.go:89] found id: "0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7"
	I1101 09:44:45.907024  431145 cri.go:89] found id: "5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331"
	I1101 09:44:45.907029  431145 cri.go:89] found id: "d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df"
	I1101 09:44:45.907032  431145 cri.go:89] found id: ""
	I1101 09:44:45.907082  431145 ssh_runner.go:195] Run: sudo runc list -f json
	W1101 09:44:45.920243  431145 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:44:45Z" level=error msg="open /run/runc: no such file or directory"
	I1101 09:44:45.920312  431145 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1101 09:44:45.928805  431145 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1101 09:44:45.928828  431145 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1101 09:44:45.928887  431145 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1101 09:44:45.937804  431145 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:44:45.938517  431145 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-722387" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:45.938842  431145 kubeconfig.go:62] /home/jenkins/minikube-integration/21833-104443/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-722387" cluster setting kubeconfig missing "newest-cni-722387" context setting]
	I1101 09:44:45.939439  431145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.941210  431145 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1101 09:44:45.951281  431145 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1101 09:44:45.951320  431145 kubeadm.go:602] duration metric: took 22.485962ms to restartPrimaryControlPlane
	I1101 09:44:45.951331  431145 kubeadm.go:403] duration metric: took 77.447685ms to StartCluster
	I1101 09:44:45.951352  431145 settings.go:142] acquiring lock: {Name:mk80da1f01e507c68fe7eff188e3dc10a0cd59c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.951427  431145 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:44:45.952604  431145 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21833-104443/kubeconfig: {Name:mk7ca86ba03448549b38f525f5b14606f5a93924 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1101 09:44:45.952892  431145 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1101 09:44:45.952992  431145 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1101 09:44:45.953092  431145 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-722387"
	I1101 09:44:45.953112  431145 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-722387"
	I1101 09:44:45.953110  431145 addons.go:70] Setting dashboard=true in profile "newest-cni-722387"
	I1101 09:44:45.953124  431145 addons.go:70] Setting default-storageclass=true in profile "newest-cni-722387"
	I1101 09:44:45.953137  431145 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-722387"
	I1101 09:44:45.953138  431145 addons.go:239] Setting addon dashboard=true in "newest-cni-722387"
	W1101 09:44:45.953149  431145 addons.go:248] addon dashboard should already be in state true
	I1101 09:44:45.953154  431145 config.go:182] Loaded profile config "newest-cni-722387": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:44:45.953181  431145 host.go:66] Checking if "newest-cni-722387" exists ...
	W1101 09:44:45.953119  431145 addons.go:248] addon storage-provisioner should already be in state true
	I1101 09:44:45.953228  431145 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:45.953478  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.953621  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.953671  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.955857  431145 out.go:179] * Verifying Kubernetes components...
	I1101 09:44:45.957450  431145 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1101 09:44:45.980922  431145 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1101 09:44:45.981057  431145 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1101 09:44:45.981080  431145 addons.go:239] Setting addon default-storageclass=true in "newest-cni-722387"
	W1101 09:44:45.981098  431145 addons.go:248] addon default-storageclass should already be in state true
	I1101 09:44:45.981127  431145 host.go:66] Checking if "newest-cni-722387" exists ...
	I1101 09:44:45.981614  431145 cli_runner.go:164] Run: docker container inspect newest-cni-722387 --format={{.State.Status}}
	I1101 09:44:45.982219  431145 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:45.982236  431145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1101 09:44:45.982290  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:45.983874  431145 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1101 09:44:45.985175  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1101 09:44:45.985192  431145 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1101 09:44:45.985253  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:46.012192  431145 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:46.012220  431145 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1101 09:44:46.012305  431145 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-722387
	I1101 09:44:46.019255  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:46.024050  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:46.039888  431145 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/newest-cni-722387/id_rsa Username:docker}
	I1101 09:44:46.130982  431145 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1101 09:44:46.149012  431145 api_server.go:52] waiting for apiserver process to appear ...
	I1101 09:44:46.149216  431145 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:44:46.152603  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1101 09:44:46.152632  431145 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1101 09:44:46.162406  431145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1101 09:44:46.170141  431145 api_server.go:72] duration metric: took 217.196033ms to wait for apiserver process to appear ...
	I1101 09:44:46.170173  431145 api_server.go:88] waiting for apiserver healthz status ...
	I1101 09:44:46.170197  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:46.174261  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1101 09:44:46.174290  431145 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1101 09:44:46.175301  431145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1101 09:44:46.194605  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1101 09:44:46.194670  431145 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1101 09:44:46.218647  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1101 09:44:46.218681  431145 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1101 09:44:46.237252  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1101 09:44:46.237280  431145 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1101 09:44:46.254336  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1101 09:44:46.254363  431145 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1101 09:44:46.274725  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1101 09:44:46.274752  431145 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1101 09:44:46.290997  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1101 09:44:46.291026  431145 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1101 09:44:46.308649  431145 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:44:46.308678  431145 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1101 09:44:46.327829  431145 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1101 09:44:47.697141  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1101 09:44:47.697190  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1101 09:44:47.697212  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:47.702357  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1101 09:44:47.702388  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1101 09:44:48.171067  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:48.175139  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:44:48.175170  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
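
The probe sequence above is the normal startup progression: 403 while anonymous access to /healthz lacks the system:public-info-viewer RBAC role, then 500 while the rbac/bootstrap-roles and scheduling post-start hooks finish, then 200. The same unauthenticated probe can be reproduced by hand (hypothetical command; -k skips TLS verification):

    curl -sk https://192.168.103.2:8443/healthz    # 403 -> 500 -> "ok" as bootstrap completes
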
	I1101 09:44:48.238952  431145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076486284s)
	I1101 09:44:48.238973  431145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.063640647s)
	I1101 09:44:48.239110  431145 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.911246598s)
	I1101 09:44:48.241342  431145 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-722387 addons enable metrics-server
	
	I1101 09:44:48.250599  431145 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1101 09:44:48.252084  431145 addons.go:515] duration metric: took 2.299090341s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1101 09:44:48.670953  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:48.675373  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1101 09:44:48.675404  431145 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1101 09:44:49.171049  431145 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1101 09:44:49.175504  431145 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
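
Once the kubeconfig entry exists, the same endpoint can also be queried through kubectl instead of anonymously (hypothetical manual command):

    kubectl --context newest-cni-722387 get --raw /healthz    # prints "ok"
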
	I1101 09:44:49.176558  431145 api_server.go:141] control plane version: v1.34.1
	I1101 09:44:49.176588  431145 api_server.go:131] duration metric: took 3.006407657s to wait for apiserver health ...
	I1101 09:44:49.176603  431145 system_pods.go:43] waiting for kube-system pods to appear ...
	I1101 09:44:49.180291  431145 system_pods.go:59] 8 kube-system pods found
	I1101 09:44:49.180323  431145 system_pods.go:61] "coredns-66bc5c9577-sbh67" [855a1e98-2e65-46b2-b887-ecc758fa3162] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:49.180331  431145 system_pods.go:61] "etcd-newest-cni-722387" [db6d9615-3fd5-4642-abb7-9c060c90d98e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1101 09:44:49.180339  431145 system_pods.go:61] "kindnet-vq8r5" [0e3ba1a9-d43e-4944-bd85-a7858465eeb5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1101 09:44:49.180345  431145 system_pods.go:61] "kube-apiserver-newest-cni-722387" [8e6d728a-c7de-4b60-8627-f4e2729f14b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1101 09:44:49.180351  431145 system_pods.go:61] "kube-controller-manager-newest-cni-722387" [a0094ce2-c3fe-4f6f-9f2b-7d9871577296] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1101 09:44:49.180356  431145 system_pods.go:61] "kube-proxy-rxnwv" [b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1101 09:44:49.180362  431145 system_pods.go:61] "kube-scheduler-newest-cni-722387" [8c1c8755-a1ca-4aa2-894c-b7ae1e5f1ab6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1101 09:44:49.180367  431145 system_pods.go:61] "storage-provisioner" [cca90c7a-0f05-4855-ba4d-530a67715840] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1101 09:44:49.180378  431145 system_pods.go:74] duration metric: took 3.764919ms to wait for pod list to return data ...
	I1101 09:44:49.180389  431145 default_sa.go:34] waiting for default service account to be created ...
	I1101 09:44:49.182953  431145 default_sa.go:45] found service account: "default"
	I1101 09:44:49.182973  431145 default_sa.go:55] duration metric: took 2.578627ms for default service account to be created ...
	I1101 09:44:49.182987  431145 kubeadm.go:587] duration metric: took 3.230047702s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1101 09:44:49.183001  431145 node_conditions.go:102] verifying NodePressure condition ...
	I1101 09:44:49.185699  431145 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1101 09:44:49.185726  431145 node_conditions.go:123] node cpu capacity is 8
	I1101 09:44:49.185738  431145 node_conditions.go:105] duration metric: took 2.732658ms to run NodePressure ...
	I1101 09:44:49.185750  431145 start.go:242] waiting for startup goroutines ...
	I1101 09:44:49.185760  431145 start.go:247] waiting for cluster config update ...
	I1101 09:44:49.185774  431145 start.go:256] writing updated cluster config ...
	I1101 09:44:49.186121  431145 ssh_runner.go:195] Run: rm -f paused
	I1101 09:44:49.237563  431145 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1101 09:44:49.239538  431145 out.go:179] * Done! kubectl is now configured to use "newest-cni-722387" cluster and "default" namespace by default
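
At this point the kubeconfig's current context is the profile name. A quick hypothetical verification of the final state:

    kubectl config current-context      # newest-cni-722387
    kubectl -n kube-system get pods     # the eight kube-system pods listed above
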
	
	
	==> CRI-O <==
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.538642921Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-rxnwv/POD" id=48768ec7-e0f1-4ef1-bc52-9516198d0cc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.538744761Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.539831879Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.540693533Z" level=info msg="Ran pod sandbox d21c985e76aa6d326ff83ace7f18bf88eea00a0d0b3d8e1600ad69bf0bc63f6b with infra container: kube-system/kindnet-vq8r5/POD" id=b0aa7878-4e25-413b-8a4a-da3819437144 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.54200373Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0475fdeb-1300-4b95-b844-4f1c7902f716 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.542522727Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=48768ec7-e0f1-4ef1-bc52-9516198d0cc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.54294348Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=931cd5d5-ece0-4f0b-96ce-30ca02520d92 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544142909Z" level=info msg="Creating container: kube-system/kindnet-vq8r5/kindnet-cni" id=eb5f4184-fa3a-4e03-af22-822ba44527d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544229469Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544241685Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.544861438Z" level=info msg="Ran pod sandbox c0a8949892c04a9ea729b4726e2b89418ae56dcfa0ac75f64ee600147b5ab0b2 with infra container: kube-system/kube-proxy-rxnwv/POD" id=48768ec7-e0f1-4ef1-bc52-9516198d0cc9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.545836341Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6f29e1bb-2fe9-4a7f-9145-3917116456b6 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.547863592Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=fd637943-2c5c-4a44-a357-ec0ba5cb8708 name=/runtime.v1.ImageService/ImageStatus
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.548551717Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.549008124Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.549375666Z" level=info msg="Creating container: kube-system/kube-proxy-rxnwv/kube-proxy" id=24b34801-7ebc-4b69-980c-986d1234bc6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.549477279Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.55387464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.554564104Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.578675307Z" level=info msg="Created container 6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4: kube-system/kindnet-vq8r5/kindnet-cni" id=eb5f4184-fa3a-4e03-af22-822ba44527d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.57940374Z" level=info msg="Starting container: 6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4" id=7a70662f-7799-454a-9816-ef74e22482f3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.581134892Z" level=info msg="Created container f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd: kube-system/kube-proxy-rxnwv/kube-proxy" id=24b34801-7ebc-4b69-980c-986d1234bc6e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.581400843Z" level=info msg="Started container" PID=1036 containerID=6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4 description=kube-system/kindnet-vq8r5/kindnet-cni id=7a70662f-7799-454a-9816-ef74e22482f3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d21c985e76aa6d326ff83ace7f18bf88eea00a0d0b3d8e1600ad69bf0bc63f6b
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.581861239Z" level=info msg="Starting container: f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd" id=74ad86a2-9d2d-490d-9e1b-d04d5f920c33 name=/runtime.v1.RuntimeService/StartContainer
	Nov 01 09:44:48 newest-cni-722387 crio[514]: time="2025-11-01T09:44:48.584625915Z" level=info msg="Started container" PID=1037 containerID=f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd description=kube-system/kube-proxy-rxnwv/kube-proxy id=74ad86a2-9d2d-490d-9e1b-d04d5f920c33 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0a8949892c04a9ea729b4726e2b89418ae56dcfa0ac75f64ee600147b5ab0b2
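
The crio entries above show the normal CRI flow for a restarted pod: RunPodSandbox brings up the infra (pause) container, then each workload container goes through CreateContainer and StartContainer inside that sandbox. A minimal sketch of inspecting the same objects by hand with crictl, assuming a shell on the node (e.g. via minikube ssh -p newest-cni-722387):

	# IDs and names are taken from the log above; crictl resolves unambiguous ID prefixes
	sudo crictl pods --name kube-proxy-rxnwv      # the sandbox created by RunPodSandbox
	sudo crictl ps -a --name kube-proxy           # the container from CreateContainer/StartContainer
	sudo crictl inspect f38942d79414d             # full CRI status of the kube-proxy container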
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f38942d79414d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   c0a8949892c04       kube-proxy-rxnwv                            kube-system
	6f51ecac5c19a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   d21c985e76aa6       kindnet-vq8r5                               kube-system
	8c9de05b45c27       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   3120283fbceaf       kube-controller-manager-newest-cni-722387   kube-system
	0c3e2ddaf2952       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   d78e9e4f2bdd6       kube-scheduler-newest-cni-722387            kube-system
	5e73866046dcd       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   c0711e4e294f5       kube-apiserver-newest-cni-722387            kube-system
	d99ec39de9349       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   c631c357aff3f       etcd-newest-cni-722387                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-722387
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-722387
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22f43620289ade9cffe9cd5d699e7474669a76c7
	                    minikube.k8s.io/name=newest-cni-722387
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_01T09_44_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 01 Nov 2025 09:44:19 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-722387
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 01 Nov 2025 09:44:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 01 Nov 2025 09:44:47 +0000   Sat, 01 Nov 2025 09:44:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-722387
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                ae9053f9-594c-4df9-adeb-a6fd802f163d
	  Boot ID:                    96ec4b11-61d9-423d-a4c1-f7aeb354e961
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-722387                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-vq8r5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-newest-cni-722387             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-newest-cni-722387    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-rxnwv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-newest-cni-722387             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 5s    kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node newest-cni-722387 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node newest-cni-722387 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node newest-cni-722387 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node newest-cni-722387 event: Registered Node newest-cni-722387 in Controller
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-722387 event: Registered Node newest-cni-722387 in Controller
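
The Ready=False condition above ("no CNI configuration file in /etc/cni/net.d/") is expected at this point in the run: kindnet was only just restarted (see the crio log above) and has not yet written its CNI config, so kubelet keeps the node NotReady and the not-ready taint in place. A sketch of watching for the flip, assuming the kubeconfig context name used throughout this test:

	kubectl --context newest-cni-722387 get node newest-cni-722387 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'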
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +3.477910] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ae 04 9f a0 9b 21 08 06
	[  +0.005887] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[ +14.914762] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 4e 7c 22 5b 8f 08 06
	[  +0.000374] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 22 45 f7 d9 8c 57 08 06
	[  +7.619856] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 7a ef 68 67 b0 08 06
	[  +0.000429] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff de 22 05 b8 cb da 08 06
	[Nov 1 09:40] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e af d7 e1 1d ff 08 06
	[  +0.038807] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[ +19.541525] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff aa 2c 73 70 9f 13 08 06
	[  +0.000331] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 52 4f 28 fa eb e1 08 06
	[Nov 1 09:42] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff aa a5 0d 72 a3 f1 08 06
	[  +0.001148] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ee 48 94 0d b5 6c 08 06
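
The "martian source" lines are the kernel's reverse-path filter logging packets whose source address is not routable back out the interface they arrived on; with Docker bridge networks on shared CI hosts this is common background noise rather than a symptom of the failure. A sketch of checking the sysctls that control this logging (run on the host; values here are not taken from this run):

	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians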
	
	
	==> etcd [d99ec39de9349dcd9453f38fee56ffbfa79a124b7674dc6b9aab0f30439608df] <==
	{"level":"warn","ts":"2025-11-01T09:44:47.000500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.010141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.017373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.026058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.033055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.039995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.046811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.053438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.060965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.082847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.091748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.099607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.107295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.115877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.124057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.135110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.148269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.156101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.164232Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.172618Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.180557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.197354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.204495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.212419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-01T09:44:47.261640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34496","server-name":"","error":"EOF"}
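
The repeated "rejected connection ... error EOF" warnings are connections to etcd's client port that close before completing a TLS handshake, the usual signature of plain TCP health probes rather than failing clients. A sketch of a real health check from inside the pod; the certificate paths are minikube's usual defaults and are an assumption here, not read from this run:

	kubectl --context newest-cni-722387 -n kube-system exec etcd-newest-cni-722387 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint health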
	
	
	==> kernel <==
	 09:44:54 up  1:27,  0 user,  load average: 7.03, 5.77, 3.51
	Linux newest-cni-722387 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6f51ecac5c19ac31954412812478dc448fea1a3d068f79b2c814fe1db5ae5ec4] <==
	I1101 09:44:48.832149       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1101 09:44:48.832456       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1101 09:44:48.832610       1 main.go:148] setting mtu 1500 for CNI 
	I1101 09:44:48.832629       1 main.go:178] kindnetd IP family: "ipv4"
	I1101 09:44:48.832650       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-01T09:44:49Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1101 09:44:49.035223       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1101 09:44:49.035253       1 controller.go:381] "Waiting for informer caches to sync"
	I1101 09:44:49.035265       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1101 09:44:49.035473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1101 09:44:49.636245       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1101 09:44:49.636280       1 metrics.go:72] Registering metrics
	I1101 09:44:49.636350       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [5e73866046dcd3375992b040c38f3429a03f7320f9bf75365a3ed14380282331] <==
	I1101 09:44:47.778246       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1101 09:44:47.778227       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1101 09:44:47.778366       1 aggregator.go:171] initial CRD sync complete...
	I1101 09:44:47.779133       1 autoregister_controller.go:144] Starting autoregister controller
	I1101 09:44:47.779161       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1101 09:44:47.779191       1 cache.go:39] Caches are synced for autoregister controller
	I1101 09:44:47.778971       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1101 09:44:47.779574       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1101 09:44:47.785833       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1101 09:44:47.797016       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1101 09:44:47.803323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1101 09:44:47.803339       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1101 09:44:48.044496       1 controller.go:667] quota admission added evaluator for: namespaces
	I1101 09:44:48.079601       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1101 09:44:48.102322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1101 09:44:48.110662       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1101 09:44:48.117566       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1101 09:44:48.153374       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.29.167"}
	I1101 09:44:48.164038       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.103.174"}
	I1101 09:44:48.682220       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1101 09:44:51.487968       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:44:51.488013       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1101 09:44:51.538334       1 controller.go:667] quota admission added evaluator for: endpoints
	I1101 09:44:51.639325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1101 09:44:51.639325       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8c9de05b45c279cd3c244c00a959581d8649bb7f6bf3eb6fa42032a304c39b00] <==
	I1101 09:44:51.095600       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1101 09:44:51.097746       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1101 09:44:51.109693       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1101 09:44:51.115103       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1101 09:44:51.117429       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1101 09:44:51.119642       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1101 09:44:51.124378       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1101 09:44:51.126989       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1101 09:44:51.134501       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1101 09:44:51.134532       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1101 09:44:51.134549       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1101 09:44:51.134584       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1101 09:44:51.134671       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1101 09:44:51.134687       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1101 09:44:51.134767       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1101 09:44:51.134706       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1101 09:44:51.134794       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1101 09:44:51.136604       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1101 09:44:51.138704       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1101 09:44:51.139776       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1101 09:44:51.142030       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1101 09:44:51.144268       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1101 09:44:51.148628       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1101 09:44:51.151087       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1101 09:44:51.153667       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f38942d79414deb4d6d4fba9f256d16f028afa0decd341b909a66581363182cd] <==
	I1101 09:44:48.620482       1 server_linux.go:53] "Using iptables proxy"
	I1101 09:44:48.702389       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1101 09:44:48.803385       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1101 09:44:48.803432       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1101 09:44:48.803542       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1101 09:44:48.826738       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1101 09:44:48.826794       1 server_linux.go:132] "Using iptables Proxier"
	I1101 09:44:48.833630       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1101 09:44:48.834096       1 server.go:527] "Version info" version="v1.34.1"
	I1101 09:44:48.834141       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:44:48.835727       1 config.go:403] "Starting serviceCIDR config controller"
	I1101 09:44:48.835757       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1101 09:44:48.835762       1 config.go:200] "Starting service config controller"
	I1101 09:44:48.835780       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1101 09:44:48.835784       1 config.go:106] "Starting endpoint slice config controller"
	I1101 09:44:48.835790       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1101 09:44:48.835804       1 config.go:309] "Starting node config controller"
	I1101 09:44:48.835809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1101 09:44:48.936700       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1101 09:44:48.936716       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1101 09:44:48.936749       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1101 09:44:48.936770       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
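
The single E-level line above is kube-proxy noting that nodePortAddresses is unset, so NodePort connections are accepted on every local IP; the log itself suggests `--nodeport-addresses primary`. In a kubeadm-style cluster like this one, kube-proxy reads its configuration from the kube-system/kube-proxy ConfigMap, so the equivalent change would be a KubeProxyConfiguration snippet along these lines (a sketch, not taken from this run):

	nodePortAddresses:
	- primary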
	
	
	==> kube-scheduler [0c3e2ddaf2952e68eb50ec0e96d0420fae0487c0267bab0f5fbb97977f8fc6a7] <==
	I1101 09:44:46.119293       1 serving.go:386] Generated self-signed cert in-memory
	I1101 09:44:47.883866       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1101 09:44:47.883936       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1101 09:44:47.889201       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1101 09:44:47.889235       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:44:47.889248       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1101 09:44:47.889261       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1101 09:44:47.889301       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:44:47.889313       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:44:47.889468       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1101 09:44:47.889788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1101 09:44:47.989548       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1101 09:44:47.989571       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1101 09:44:47.989643       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.734969     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.808149     665 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.808265     665 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.808312     665 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.809655     665 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.848712     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-722387\" already exists" pod="kube-system/kube-controller-manager-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.848757     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.856985     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-722387\" already exists" pod="kube-system/kube-scheduler-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.857032     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.864294     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-722387\" already exists" pod="kube-system/etcd-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: I1101 09:44:47.864336     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:47 newest-cni-722387 kubelet[665]: E1101 09:44:47.874250     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-722387\" already exists" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.228314     665 apiserver.go:52] "Watching apiserver"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.234831     665 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.277274     665 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: E1101 09:44:48.283702     665 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-722387\" already exists" pod="kube-system/kube-apiserver-newest-cni-722387"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.295938     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-xtables-lock\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.295976     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-lib-modules\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.296022     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0-lib-modules\") pod \"kube-proxy-rxnwv\" (UID: \"b51bf1c6-c0c1-4327-bc97-9f81ac83c7f0\") " pod="kube-system/kube-proxy-rxnwv"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.296090     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-cni-cfg\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:48 newest-cni-722387 kubelet[665]: I1101 09:44:48.296132     665 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e3ba1a9-d43e-4944-bd85-a7858465eeb5-xtables-lock\") pod \"kindnet-vq8r5\" (UID: \"0e3ba1a9-d43e-4944-bd85-a7858465eeb5\") " pod="kube-system/kindnet-vq8r5"
	Nov 01 09:44:50 newest-cni-722387 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 01 09:44:50 newest-cni-722387 kubelet[665]: I1101 09:44:50.269215     665 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 01 09:44:50 newest-cni-722387 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 01 09:44:50 newest-cni-722387 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-722387 -n newest-cni-722387
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-722387 -n newest-cni-722387: exit status 2 (340.976223ms)

-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-722387 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl: exit status 1 (63.810037ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-sbh67" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-p2zcd" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-gnmwl" not found
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-722387 describe pod coredns-66bc5c9577-sbh67 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-p2zcd kubernetes-dashboard-855c9754f9-gnmwl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.24s)

Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.8
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 11.66
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.88
22 TestOffline 55.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 161.37
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.44
48 TestAddons/StoppedEnableDisable 18.58
49 TestCertOptions 33.98
50 TestCertExpiration 214.97
52 TestForceSystemdFlag 28.72
53 TestForceSystemdEnv 31.63
58 TestErrorSpam/setup 22.09
59 TestErrorSpam/start 0.72
60 TestErrorSpam/status 1.01
61 TestErrorSpam/pause 6.35
62 TestErrorSpam/unpause 5.32
63 TestErrorSpam/stop 2.58
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.95
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.68
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.48
75 TestFunctional/serial/CacheCmd/cache/add_local 2.35
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.12
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 68.01
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.26
86 TestFunctional/serial/LogsFileCmd 1.28
87 TestFunctional/serial/InvalidService 4.09
89 TestFunctional/parallel/ConfigCmd 0.46
90 TestFunctional/parallel/DashboardCmd 9.69
91 TestFunctional/parallel/DryRun 0.44
92 TestFunctional/parallel/InternationalLanguage 0.25
93 TestFunctional/parallel/StatusCmd 1.01
98 TestFunctional/parallel/AddonsCmd 0.22
99 TestFunctional/parallel/PersistentVolumeClaim 30.29
101 TestFunctional/parallel/SSHCmd 0.6
102 TestFunctional/parallel/CpCmd 1.85
103 TestFunctional/parallel/MySQL 16.72
104 TestFunctional/parallel/FileSync 0.29
105 TestFunctional/parallel/CertSync 1.79
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.67
113 TestFunctional/parallel/License 1.07
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.23
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
121 TestFunctional/parallel/ProfileCmd/profile_list 0.46
122 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
123 TestFunctional/parallel/MountCmd/any-port 9.23
124 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/Version/short 0.06
131 TestFunctional/parallel/Version/components 0.53
132 TestFunctional/parallel/MountCmd/specific-port 1.63
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
137 TestFunctional/parallel/ImageCommands/ImageBuild 6.1
138 TestFunctional/parallel/ImageCommands/Setup 1.8
139 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
144 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
147 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
148 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
149 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 110.2
163 TestMultiControlPlane/serial/DeployApp 6.56
164 TestMultiControlPlane/serial/PingHostFromPods 1.09
165 TestMultiControlPlane/serial/AddWorkerNode 25.21
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.93
168 TestMultiControlPlane/serial/CopyFile 17.89
169 TestMultiControlPlane/serial/StopSecondaryNode 19.84
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.76
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.4
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.76
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.66
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
176 TestMultiControlPlane/serial/StopCluster 37.29
177 TestMultiControlPlane/serial/RestartCluster 51.4
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 41.47
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
185 TestJSONOutput/start/Command 37.02
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.14
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 35
211 TestKicCustomNetwork/use_default_bridge_network 25.7
212 TestKicExistingNetwork 27.74
213 TestKicCustomSubnet 25.94
214 TestKicStaticIP 27.79
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 49.59
219 TestMountStart/serial/StartWithMountFirst 6.56
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.78
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.28
225 TestMountStart/serial/Stop 1.26
226 TestMountStart/serial/RestartStopped 7.85
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 66.28
231 TestMultiNode/serial/DeployApp2Nodes 4.68
232 TestMultiNode/serial/PingHostFrom2Pods 0.74
233 TestMultiNode/serial/AddNode 23.93
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.68
236 TestMultiNode/serial/CopyFile 10.13
237 TestMultiNode/serial/StopNode 2.33
238 TestMultiNode/serial/StartAfterStop 7.28
239 TestMultiNode/serial/RestartKeepsNodes 55.68
240 TestMultiNode/serial/DeleteNode 5.16
241 TestMultiNode/serial/StopMultiNode 28.63
242 TestMultiNode/serial/RestartMultiNode 51.76
243 TestMultiNode/serial/ValidateNameConflict 23.42
248 TestPreload 111.16
250 TestScheduledStopUnix 97.69
253 TestInsufficientStorage 10.44
254 TestRunningBinaryUpgrade 84.87
256 TestKubernetesUpgrade 306.3
257 TestMissingContainerUpgrade 66.88
258 TestStoppedBinaryUpgrade/Setup 3.22
262 TestStoppedBinaryUpgrade/Upgrade 72.31
267 TestNetworkPlugins/group/false 5.75
272 TestPause/serial/Start 53.4
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.12
282 TestNoKubernetes/serial/StartWithK8s 23.92
283 TestPause/serial/SecondStartNoReconfiguration 7.95
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
286 TestNoKubernetes/serial/StartWithStopK8s 21.32
287 TestNoKubernetes/serial/Start 11.41
288 TestNoKubernetes/serial/VerifyK8sNotRunning 0.4
289 TestNoKubernetes/serial/ProfileList 3.91
290 TestNoKubernetes/serial/Stop 1.32
291 TestNoKubernetes/serial/StartNoArgs 7.64
292 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
293 TestNetworkPlugins/group/auto/Start 42.64
294 TestNetworkPlugins/group/auto/KubeletFlags 0.31
295 TestNetworkPlugins/group/auto/NetCatPod 9.22
296 TestNetworkPlugins/group/auto/DNS 0.12
297 TestNetworkPlugins/group/auto/Localhost 0.09
298 TestNetworkPlugins/group/auto/HairPin 0.1
299 TestNetworkPlugins/group/flannel/Start 47.35
300 TestNetworkPlugins/group/enable-default-cni/Start 39.88
301 TestNetworkPlugins/group/flannel/ControllerPod 6.01
302 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
303 TestNetworkPlugins/group/flannel/NetCatPod 8.2
304 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
305 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
306 TestNetworkPlugins/group/flannel/DNS 0.12
307 TestNetworkPlugins/group/flannel/Localhost 0.11
308 TestNetworkPlugins/group/flannel/HairPin 0.12
309 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
310 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
311 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
312 TestNetworkPlugins/group/bridge/Start 39.97
313 TestNetworkPlugins/group/calico/Start 48.87
314 TestNetworkPlugins/group/kindnet/Start 44.9
315 TestNetworkPlugins/group/bridge/KubeletFlags 0.37
316 TestNetworkPlugins/group/bridge/NetCatPod 9.25
317 TestNetworkPlugins/group/bridge/DNS 0.13
318 TestNetworkPlugins/group/bridge/Localhost 0.09
319 TestNetworkPlugins/group/bridge/HairPin 0.11
320 TestNetworkPlugins/group/calico/ControllerPod 6.01
321 TestNetworkPlugins/group/calico/KubeletFlags 0.33
322 TestNetworkPlugins/group/calico/NetCatPod 12.21
323 TestNetworkPlugins/group/custom-flannel/Start 52.55
324 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/DNS 0.31
326 TestNetworkPlugins/group/calico/Localhost 0.09
327 TestNetworkPlugins/group/calico/HairPin 0.09
328 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
329 TestNetworkPlugins/group/kindnet/NetCatPod 17.2
330 TestNetworkPlugins/group/kindnet/DNS 0.12
331 TestNetworkPlugins/group/kindnet/Localhost 0.11
332 TestNetworkPlugins/group/kindnet/HairPin 0.11
334 TestStartStop/group/old-k8s-version/serial/FirstStart 50.95
336 TestStartStop/group/no-preload/serial/FirstStart 58.83
337 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
338 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.28
339 TestNetworkPlugins/group/custom-flannel/DNS 0.21
340 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
341 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
343 TestStartStop/group/embed-certs/serial/FirstStart 46.16
344 TestStartStop/group/old-k8s-version/serial/DeployApp 9.3
346 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.81
348 TestStartStop/group/old-k8s-version/serial/Stop 16.29
349 TestStartStop/group/no-preload/serial/DeployApp 10.26
350 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
351 TestStartStop/group/old-k8s-version/serial/SecondStart 45.9
353 TestStartStop/group/no-preload/serial/Stop 16.99
354 TestStartStop/group/embed-certs/serial/DeployApp 10.25
355 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
358 TestStartStop/group/embed-certs/serial/Stop 18.17
359 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.53
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
361 TestStartStop/group/no-preload/serial/SecondStart 22.53
362 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
363 TestStartStop/group/embed-certs/serial/SecondStart 45.77
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.25
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.82
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 13.01
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
371 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
372 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
375 TestStartStop/group/newest-cni/serial/FirstStart 27.22
376 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
377 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
379 TestStartStop/group/newest-cni/serial/DeployApp 0
381 TestStartStop/group/newest-cni/serial/Stop 8.08
382 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
384 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
385 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
388 TestStartStop/group/newest-cni/serial/SecondStart 11.15
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
TestDownloadOnly/v1.28.0/json-events (12.8s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-998424 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-998424 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.80425591s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.80s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1101 08:55:18.167704  107955 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1101 08:55:18.167812  107955 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
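
preload-exists only asserts that the preload tarball is already present in the local cache; nothing is downloaded at this step, which is why it completes in 0s. A sketch of the same check by hand, using the path printed above:

	ls -lh /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/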
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-998424
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-998424: exit status 85 (75.661965ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-998424 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-998424 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:55:05
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:55:05.419684  107967 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:55:05.419989  107967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:05.420000  107967 out.go:374] Setting ErrFile to fd 2...
	I1101 08:55:05.420007  107967 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:05.420239  107967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	W1101 08:55:05.420390  107967 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21833-104443/.minikube/config/config.json: open /home/jenkins/minikube-integration/21833-104443/.minikube/config/config.json: no such file or directory
	I1101 08:55:05.420890  107967 out.go:368] Setting JSON to true
	I1101 08:55:05.421807  107967 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2243,"bootTime":1761985062,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:55:05.421905  107967 start.go:143] virtualization: kvm guest
	I1101 08:55:05.424219  107967 out.go:99] [download-only-998424] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:55:05.424402  107967 notify.go:221] Checking for updates...
	W1101 08:55:05.424411  107967 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball: no such file or directory
	I1101 08:55:05.425484  107967 out.go:171] MINIKUBE_LOCATION=21833
	I1101 08:55:05.427055  107967 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:55:05.428148  107967 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 08:55:05.429415  107967 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 08:55:05.430733  107967 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 08:55:05.432804  107967 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:55:05.433200  107967 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:55:05.457602  107967 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:55:05.457702  107967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:05.522015  107967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 08:55:05.508889 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:05.522119  107967 docker.go:319] overlay module found
	I1101 08:55:05.523612  107967 out.go:99] Using the docker driver based on user configuration
	I1101 08:55:05.523643  107967 start.go:309] selected driver: docker
	I1101 08:55:05.523649  107967 start.go:930] validating driver "docker" against <nil>
	I1101 08:55:05.523754  107967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:05.584401  107967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-01 08:55:05.573628991 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:05.584583  107967 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:55:05.585128  107967 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 08:55:05.585323  107967 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:55:05.587053  107967 out.go:171] Using Docker driver with root privileges
	I1101 08:55:05.589090  107967 cni.go:84] Creating CNI manager for ""
	I1101 08:55:05.589192  107967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:55:05.589207  107967 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:55:05.589305  107967 start.go:353] cluster config:
	{Name:download-only-998424 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-998424 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:55:05.590775  107967 out.go:99] Starting "download-only-998424" primary control-plane node in "download-only-998424" cluster
	I1101 08:55:05.590803  107967 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:55:05.591951  107967 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:55:05.591985  107967 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:55:05.592095  107967 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:55:05.610194  107967 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:55:05.610394  107967 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:55:05.610518  107967 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:55:06.425733  107967 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 08:55:06.425781  107967 cache.go:59] Caching tarball of preloaded images
	I1101 08:55:06.425962  107967 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1101 08:55:06.428066  107967 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1101 08:55:06.428096  107967 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 08:55:06.530126  107967 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1101 08:55:06.530269  107967 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1101 08:55:10.628222  107967 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	
	
	* The control-plane node download-only-998424 host does not exist
	  To start a cluster, run: "minikube start -p download-only-998424"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
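Note: the log above shows the two-step preload fetch: preload.go:313 asks the GCS API for the tarball's MD5 (returned at preload.go:290), and download.go:108 appends it as a ?checksum=md5:... query so the download is verified on arrival. A minimal Go sketch of that download-then-verify pattern, using the URL and checksum from the log; the helper name is hypothetical, not minikube's actual download code:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dest and fails if the payload's MD5
// does not match want (hex-encoded) -- the same gate the checksum=md5:...
// query in the log expresses. Hypothetical helper for illustration.
func downloadWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Tee the body into the file and the hash in a single pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and MD5 taken verbatim from the log lines above.
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
	if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

Hashing through io.MultiWriter keeps verification single-pass: the tarball never has to be re-read from disk after the download completes.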

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-998424
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.1/json-events (11.66s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-701138 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-701138 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.654782267s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.66s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1101 08:55:30.281940  107955 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1101 08:55:30.281997  107955 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)
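Note: preload-exists completes in 0.00s because the json-events run above already wrote the tarball into the cache; the check reduces to a stat of the path printed by preload.go:198. A sketch of that existence check, assuming the same MINIKUBE_HOME layout as this job:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Cache layout as printed by preload.go:198 in the log above.
	home := "/home/jenkins/minikube-integration/21833-104443/.minikube"
	tarball := filepath.Join(home, "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", tarball)
}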

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-701138
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-701138: exit status 85 (75.188449ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-998424 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-998424 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │
	│ delete  │ -p download-only-998424                                                                                                                                                   │ download-only-998424 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │ 01 Nov 25 08:55 UTC │
	│ start   │ -o=json --download-only -p download-only-701138 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-701138 │ jenkins │ v1.37.0 │ 01 Nov 25 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/01 08:55:18
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1101 08:55:18.682710  108348 out.go:360] Setting OutFile to fd 1 ...
	I1101 08:55:18.682968  108348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:18.682976  108348 out.go:374] Setting ErrFile to fd 2...
	I1101 08:55:18.682980  108348 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 08:55:18.683189  108348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 08:55:18.683635  108348 out.go:368] Setting JSON to true
	I1101 08:55:18.684464  108348 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2257,"bootTime":1761985062,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 08:55:18.684556  108348 start.go:143] virtualization: kvm guest
	I1101 08:55:18.686307  108348 out.go:99] [download-only-701138] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 08:55:18.686497  108348 notify.go:221] Checking for updates...
	I1101 08:55:18.687758  108348 out.go:171] MINIKUBE_LOCATION=21833
	I1101 08:55:18.689475  108348 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 08:55:18.691889  108348 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 08:55:18.693140  108348 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 08:55:18.694300  108348 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1101 08:55:18.696440  108348 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1101 08:55:18.696673  108348 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 08:55:18.720466  108348 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 08:55:18.720623  108348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:18.781440  108348 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-01 08:55:18.771078036 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:18.781553  108348 docker.go:319] overlay module found
	I1101 08:55:18.783317  108348 out.go:99] Using the docker driver based on user configuration
	I1101 08:55:18.783349  108348 start.go:309] selected driver: docker
	I1101 08:55:18.783357  108348 start.go:930] validating driver "docker" against <nil>
	I1101 08:55:18.783439  108348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 08:55:18.841854  108348 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-01 08:55:18.83178791 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 08:55:18.842022  108348 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1101 08:55:18.842462  108348 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1101 08:55:18.842610  108348 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1101 08:55:18.844328  108348 out.go:171] Using Docker driver with root privileges
	I1101 08:55:18.845437  108348 cni.go:84] Creating CNI manager for ""
	I1101 08:55:18.845496  108348 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1101 08:55:18.845509  108348 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1101 08:55:18.845590  108348 start.go:353] cluster config:
	{Name:download-only-701138 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-701138 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 08:55:18.846775  108348 out.go:99] Starting "download-only-701138" primary control-plane node in "download-only-701138" cluster
	I1101 08:55:18.846796  108348 cache.go:124] Beginning downloading kic base image for docker with crio
	I1101 08:55:18.848383  108348 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1101 08:55:18.848413  108348 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:18.848534  108348 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1101 08:55:18.868738  108348 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1101 08:55:18.868851  108348 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1101 08:55:18.868869  108348 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1101 08:55:18.868873  108348 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1101 08:55:18.868884  108348 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1101 08:55:19.689689  108348 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1101 08:55:19.689725  108348 cache.go:59] Caching tarball of preloaded images
	I1101 08:55:19.689884  108348 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1101 08:55:19.691772  108348 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1101 08:55:19.691800  108348 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1101 08:55:19.787096  108348 preload.go:290] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1101 08:55:19.787143  108348 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21833-104443/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-701138 host does not exist
	  To start a cluster, run: "minikube start -p download-only-701138"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-701138
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-005556 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-005556" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-005556
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.88s)

=== RUN   TestBinaryMirror
I1101 08:55:31.545920  107955 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-394939 --alsologtostderr --binary-mirror http://127.0.0.1:41257 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-394939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-394939
--- PASS: TestBinaryMirror (0.88s)
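Note: binary.go:74 above takes the checksum=file:... route: instead of a literal digest, the reference points at the companion kubectl.sha256 file published next to the binary. A self-contained Go sketch of that flow, assuming (as dl.k8s.io does) that the .sha256 file holds just the hex digest:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// verifyAgainstCompanion hashes the binary at binURL with SHA-256 and
// compares it to the digest published in the companion sumURL file.
func verifyAgainstCompanion(binURL, sumURL string) error {
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file: %s", sumURL)
	}
	want := fields[0]

	binResp, err := http.Get(binURL)
	if err != nil {
		return err
	}
	defer binResp.Body.Close()
	h := sha256.New()
	if _, err := io.Copy(h, binResp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("sha256 mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// The kubectl URL from binary.go:74 above.
	bin := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	if err := verifyAgainstCompanion(bin, bin+".sha256"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubectl checksum OK")
}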

                                                
                                    
TestOffline (55.6s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-203516 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-203516 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (52.748145745s)
helpers_test.go:175: Cleaning up "offline-crio-203516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-203516
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-203516: (2.851667485s)
--- PASS: TestOffline (55.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-993117
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-993117: exit status 85 (64.842293ms)
-- stdout --
	* Profile "addons-993117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-993117"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-993117
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-993117: exit status 85 (65.272036ms)
-- stdout --
	* Profile "addons-993117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-993117"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (161.37s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-993117 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-993117 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m41.370061401s)
--- PASS: TestAddons/Setup (161.37s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-993117 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-993117 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-993117 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-993117 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a981116f-e99d-4594-8675-f889dd0ec9e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a981116f-e99d-4594-8675-f889dd0ec9e5] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00306155s
addons_test.go:694: (dbg) Run:  kubectl --context addons-993117 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-993117 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-993117 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.44s)

TestAddons/StoppedEnableDisable (18.58s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-993117
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-993117: (18.271109337s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-993117
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-993117
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-993117
--- PASS: TestAddons/StoppedEnableDisable (18.58s)

TestCertOptions (33.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-849997 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-849997 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.574184005s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-849997 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-849997 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-849997 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-849997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-849997
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-849997: (2.586873857s)
--- PASS: TestCertOptions (33.98s)
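Note: the `openssl x509 -text -noout` step above is what proves the --apiserver-ips/--apiserver-names flags landed in the certificate. The same inspection can be done in Go against the path the test reads; the expected SANs come straight from the start command:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path ssh'd into by cert_options_test.go:60 above.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// With the flags above, DNS SANs should include localhost and
	// www.google.com, and IP SANs 127.0.0.1 and 192.168.15.15.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}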

                                                
                                    
TestCertExpiration (214.97s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-521698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-521698 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.320418129s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-521698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-521698 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.917787646s)
helpers_test.go:175: Cleaning up "cert-expiration-521698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-521698
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-521698: (2.73546097s)
--- PASS: TestCertExpiration (214.97s)
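Note: the two starts above differ only in --cert-expiration (3m, then 8760h); the second start finds the short-lived certs and regenerates them with the longer lifetime. A sketch of the property being exercised: read the apiserver cert (same path as in TestCertOptions) and check how far out NotAfter is:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block in apiserver.crt")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	remaining := time.Until(cert.NotAfter)
	fmt.Printf("cert expires %s (in %s)\n", cert.NotAfter, remaining.Round(time.Minute))
	// After the 8760h restart this should be close to a year out; under
	// the original 3m setting it would already have lapsed.
	if remaining < 0 {
		fmt.Println("certificate expired; `minikube start` would regenerate it")
	}
}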

                                                
                                    
TestForceSystemdFlag (28.72s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-281143 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-281143 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.676859518s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-281143 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-281143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-281143
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-281143: (2.670233214s)
--- PASS: TestForceSystemdFlag (28.72s)
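Note: the ssh step above dumps /etc/crio/crio.conf.d/02-crio.conf, and with --force-systemd the test presumably expects the drop-in to pin cgroup_manager = "systemd". A stand-alone sketch of that check (path taken from the log; run inside the node, e.g. via `minikube ssh`):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/crio/crio.conf.d/02-crio.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "cgroup_manager") {
			fmt.Println(line) // expect: cgroup_manager = "systemd"
			return
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cgroup_manager not set; crio will use its built-in default")
}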

                                                
                                    
TestForceSystemdEnv (31.63s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-773086 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-773086 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.935386219s)
helpers_test.go:175: Cleaning up "force-systemd-env-773086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-773086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-773086: (2.698256305s)
--- PASS: TestForceSystemdEnv (31.63s)

TestErrorSpam/setup (22.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-221404 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-221404 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-221404 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-221404 --driver=docker  --container-runtime=crio: (22.085307982s)
--- PASS: TestErrorSpam/setup (22.09s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (1.01s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 status
--- PASS: TestErrorSpam/status (1.01s)

TestErrorSpam/pause (6.35s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause: exit status 80 (2.396182377s)
-- stdout --
	* Pausing node nospam-221404 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:01:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause: exit status 80 (1.500444167s)
-- stdout --
	* Pausing node nospam-221404 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:02:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause: exit status 80 (2.457026689s)
-- stdout --
	* Pausing node nospam-221404 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:02:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.35s)
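Note: all three pause attempts fail identically: per the GUEST_PAUSE message, minikube shells out to `sudo runc list -f json` to enumerate running containers, and runc exits 1 because its state directory /run/runc is absent on this node (the log does not show why). A sketch of the failing call and the decode of its JSON on success; the struct keeps only a subset of the fields runc emits:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

// containerState mirrors part of runc's `list -f json` output.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is the branch the report shows: runc exits 1 with
		// "open /run/runc: no such file or directory" and minikube
		// turns that into GUEST_PAUSE / GUEST_UNPAUSE errors.
		fmt.Fprintln(os.Stderr, "runc list failed:", err)
		os.Exit(1)
	}
	var states []containerState
	if err := json.Unmarshal(out, &states); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, s := range states {
		fmt.Printf("%s\t%s\n", s.ID, s.Status)
	}
}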

                                                
                                    
TestErrorSpam/unpause (5.32s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause: exit status 80 (1.697759975s)
-- stdout --
	* Unpausing node nospam-221404 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:02:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause: exit status 80 (1.652707018s)
-- stdout --
	* Unpausing node nospam-221404 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:02:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause: exit status 80 (1.968320367s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-221404 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-01T09:02:08Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.32s)
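
All three unpause attempts above fail identically: the pause/unpause path enumerates containers via `sudo runc list -f json` (quoted in the GUEST_UNPAUSE error), and runc exits 1 because its default state directory /run/runc was never created on this crio node. A minimal Go sketch of that probe follows; treating the missing state directory as "no containers" is an assumption for illustration, not minikube's actual handling.

	// probe_runc.go - sketch of the `runc list -f json` probe behind the
	// GUEST_UNPAUSE error above. Hypothetical handling: a missing /run/runc
	// state root is reported as an empty container list, not a hard error.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
		"strings"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listRuncContainers() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// runc prints "open /run/runc: no such file or directory" when it
			// has never created a container under its default state root.
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		cs, err := listRuncContainers()
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		fmt.Printf("%d runc container(s)\n", len(cs))
	}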

                                                
                                    
x
+
TestErrorSpam/stop (2.58s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 stop: (2.358140255s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-221404 --log_dir /tmp/nospam-221404 stop
--- PASS: TestErrorSpam/stop (2.58s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21833-104443/.minikube/files/etc/test/nested/copy/107955/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (37.95s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224473 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-224473 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.949254944s)
--- PASS: TestFunctional/serial/StartWithProxy (37.95s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (6.68s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1101 09:02:53.423556  107955 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224473 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-224473 --alsologtostderr -v=8: (6.676261785s)
functional_test.go:678: soft start took 6.677439532s for "functional-224473" cluster.
I1101 09:03:00.100648  107955 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.68s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-224473 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 cache add registry.k8s.io/pause:3.1: (1.449154057s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 cache add registry.k8s.io/pause:3.3: (1.584005404s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 cache add registry.k8s.io/pause:latest: (1.45052966s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.48s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.35s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-224473 /tmp/TestFunctionalserialCacheCmdcacheadd_local731233257/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cache add minikube-local-cache-test:functional-224473
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 cache add minikube-local-cache-test:functional-224473: (1.976909101s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cache delete minikube-local-cache-test:functional-224473
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-224473
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.35s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (295.841613ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 cache reload: (1.194408112s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.12s)
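
The reload sequence above can be replayed by hand: remove the image inside the node, watch `crictl inspecti` fail, run `cache reload`, and inspect again. A minimal sketch using this run's profile and image names; the helper is illustrative, not minikube test code.

	// cache_reload_check.go - replay of the cache_reload sequence above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// imagePresent mirrors the signal at functional_test.go:1168 above:
	// `crictl inspecti` exits non-zero when the image is absent from the node.
	func imagePresent(profile, image string) bool {
		return exec.Command("minikube", "-p", profile, "ssh",
			"sudo crictl inspecti "+image).Run() == nil
	}

	func main() {
		const profile, img = "functional-224473", "registry.k8s.io/pause:latest"
		fmt.Println("present before reload:", imagePresent(profile, img))
		if err := exec.Command("minikube", "-p", profile, "cache", "reload").Run(); err != nil {
			fmt.Println("cache reload failed:", err)
			return
		}
		fmt.Println("present after reload:", imagePresent(profile, img))
	}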

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 kubectl -- --context functional-224473 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-224473 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (68.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224473 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1101 09:03:14.457542  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:14.464020  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:14.475458  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:14.496975  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:14.538471  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:14.620142  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:14.781724  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:15.103517  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:15.745548  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:17.027179  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:19.590080  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:24.712467  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:34.954195  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:03:55.438596  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-224473 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m8.010194623s)
functional_test.go:776: restart took 1m8.010375438s for "functional-224473" cluster.
I1101 09:04:18.033042  107955 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (68.01s)
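
The E1101 cert_rotation lines interleaved above appear to be background noise referencing the earlier addons-993117 profile's removed client cert, not a failure of this test. The --extra-config value itself lands on the kube-apiserver command line in the control-plane static pod; a quick way to confirm that, assuming the conventional kube-apiserver-<node> pod name (this sketch is not part of the test):

	// check_extra_config.go - sketch: confirm the admission-plugin flag from
	// --extra-config reached the kube-apiserver container command line.
	// The pod name follows the usual kube-apiserver-<node> pattern (assumed).
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-224473",
			"-n", "kube-system", "get", "pod", "kube-apiserver-functional-224473",
			"-o", "jsonpath={.spec.containers[0].command}").Output()
		if err != nil {
			fmt.Println("kubectl:", err)
			return
		}
		fmt.Println("admission flag applied:",
			strings.Contains(string(out), "--enable-admission-plugins=NamespaceAutoProvision"))
	}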

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-224473 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
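
The checks above parse `kubectl get po -l tier=control-plane -o=json` and require each component's phase to be Running and its Ready condition True. A self-contained sketch of that walk, decoding only the fields the check needs (the struct shape is mine, not the test's):

	// component_health.go - sketch of the control-plane health walk above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct {
				Labels map[string]string `json:"labels"`
			} `json:"metadata"`
			Status struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-224473",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pods podList
		if err := json.Unmarshal(out, &pods); err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			// control-plane static pods carry a component=<name> label
			fmt.Printf("%s phase: %s, ready: %s\n",
				p.Metadata.Labels["component"], p.Status.Phase, ready)
		}
	}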

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.26s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 logs: (1.258996383s)
--- PASS: TestFunctional/serial/LogsCmd (1.26s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 logs --file /tmp/TestFunctionalserialLogsFileCmd1766882682/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 logs --file /tmp/TestFunctionalserialLogsFileCmd1766882682/001/logs.txt: (1.283994377s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.28s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.09s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-224473 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-224473
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-224473: exit status 115 (364.79751ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31484 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-224473 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.09s)
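
Exit status 115 (SVC_UNREACHABLE) is raised above because a NodePort URL exists while no running pod backs the service. An equivalent probe (not minikube's own code path) is to count the service's ready endpoint addresses:

	// svc_reachable.go - sketch: zero ready endpoint addresses is the
	// condition behind the SVC_UNREACHABLE exit above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-224473",
			"get", "endpoints", "invalid-svc", "-o=json").Output()
		if err != nil {
			fmt.Println("kubectl:", err)
			return
		}
		var ep struct {
			Subsets []struct {
				Addresses []struct {
					IP string `json:"ip"`
				} `json:"addresses"`
			} `json:"subsets"`
		}
		if err := json.Unmarshal(out, &ep); err != nil {
			fmt.Println("decode:", err)
			return
		}
		n := 0
		for _, s := range ep.Subsets {
			n += len(s.Addresses)
		}
		fmt.Printf("invalid-svc has %d ready endpoint address(es)\n", n)
	}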

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 config get cpus: exit status 14 (85.696352ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 config get cpus: exit status 14 (77.569258ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
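
`config get` on an unset key exits 14 with the error shown, so the test can distinguish "unset" from "set then cleared". A stand-in sketch of that tri-state behavior over a plain map (not minikube's config package):

	// config_cmd.go - sketch of the set/get/unset cycle exercised above.
	package main

	import (
		"errors"
		"fmt"
	)

	// wording copied from the stderr above; minikube maps it to exit status 14
	var errNotFound = errors.New("specified key could not be found in config")

	type config map[string]string

	func (c config) get(k string) (string, error) {
		v, ok := c[k]
		if !ok {
			return "", errNotFound
		}
		return v, nil
	}

	func main() {
		c := config{}
		if _, err := c.get("cpus"); err != nil {
			fmt.Println("get cpus:", err) // unset: error, like the first exit 14
		}
		c["cpus"] = "2"
		if v, err := c.get("cpus"); err == nil {
			fmt.Println("get cpus:", v) // set: value comes back
		}
		delete(c, "cpus")
		if _, err := c.get("cpus"); err != nil {
			fmt.Println("get cpus:", err) // unset again: second exit 14
		}
	}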

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-224473 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-224473 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 147240: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.69s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224473 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-224473 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (176.540387ms)

                                                
                                                
-- stdout --
	* [functional-224473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:04:35.676840  143024 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:04:35.677151  143024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:35.677162  143024 out.go:374] Setting ErrFile to fd 2...
	I1101 09:04:35.677166  143024 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:35.677444  143024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:04:35.678004  143024 out.go:368] Setting JSON to false
	I1101 09:04:35.679044  143024 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2814,"bootTime":1761985062,"procs":228,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:04:35.679145  143024 start.go:143] virtualization: kvm guest
	I1101 09:04:35.681262  143024 out.go:179] * [functional-224473] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:04:35.682470  143024 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:04:35.682509  143024 notify.go:221] Checking for updates...
	I1101 09:04:35.684944  143024 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:04:35.686180  143024 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:04:35.687262  143024 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:04:35.688298  143024 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:04:35.689626  143024 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:04:35.691339  143024 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:04:35.691821  143024 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:04:35.718972  143024 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:04:35.719126  143024 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:04:35.781684  143024 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-01 09:04:35.769375202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:04:35.781790  143024 docker.go:319] overlay module found
	I1101 09:04:35.784388  143024 out.go:179] * Using the docker driver based on existing profile
	I1101 09:04:35.785715  143024 start.go:309] selected driver: docker
	I1101 09:04:35.785735  143024 start.go:930] validating driver "docker" against &{Name:functional-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-224473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:04:35.785890  143024 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:04:35.787752  143024 out.go:203] 
	W1101 09:04:35.789037  143024 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1101 09:04:35.790187  143024 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224473 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.44s)
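
The dry run exits 23 before touching any resources: the requested 250MiB fails validation against the 1800MB usable floor quoted in the error. A stand-in for that check; the constant and function names are hypothetical, only the floor value is taken from the error text.

	// memory_floor.go - sketch of the RSRC_INSUFFICIENT_REQ_MEMORY validation.
	package main

	import "fmt"

	const minUsableMemoryMB = 1800 // floor quoted in the error above

	func checkRequestedMemory(reqMB int) error {
		if reqMB < minUsableMemoryMB {
			return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
				reqMB, minUsableMemoryMB)
		}
		return nil
	}

	func main() {
		fmt.Println(checkRequestedMemory(250))  // mirrors --memory 250MB above
		fmt.Println(checkRequestedMemory(4096)) // mirrors the profile's Memory:4096
	}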

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-224473 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-224473 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (250.233973ms)

                                                
                                                
-- stdout --
	* [functional-224473] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1101 09:04:36.145186  143259 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:04:36.145347  143259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:36.145355  143259 out.go:374] Setting ErrFile to fd 2...
	I1101 09:04:36.145361  143259 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:04:36.145828  143259 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:04:36.146460  143259 out.go:368] Setting JSON to false
	I1101 09:04:36.147739  143259 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":2814,"bootTime":1761985062,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:04:36.147874  143259 start.go:143] virtualization: kvm guest
	I1101 09:04:36.149694  143259 out.go:179] * [functional-224473] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1101 09:04:36.152345  143259 notify.go:221] Checking for updates...
	I1101 09:04:36.152379  143259 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:04:36.154754  143259 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:04:36.156755  143259 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:04:36.158729  143259 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:04:36.161238  143259 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:04:36.162300  143259 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:04:36.164051  143259 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:04:36.165186  143259 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:04:36.197665  143259 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:04:36.197856  143259 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:04:36.289985  143259 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-01 09:04:36.274833733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:04:36.290131  143259 docker.go:319] overlay module found
	I1101 09:04:36.292056  143259 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1101 09:04:36.293202  143259 start.go:309] selected driver: docker
	I1101 09:04:36.293227  143259 start.go:930] validating driver "docker" against &{Name:functional-224473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-224473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1101 09:04:36.293464  143259 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:04:36.296020  143259 out.go:203] 
	W1101 09:04:36.298384  143259 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1101 09:04:36.299845  143259 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
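
The `-f` argument above is a Go text/template rendered against minikube's status struct (the `kublet:` label is the test's own spelling, kept verbatim here). A self-contained sketch with a stand-in struct whose fields match the template keys:

	// status_template.go - sketch of rendering the status format string above.
	package main

	import (
		"os"
		"text/template"
	)

	// stand-in for minikube's status type; only these four fields are templated
	type status struct {
		Host, Kubelet, APIServer, Kubeconfig string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse(
			"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
		st := status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}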

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 addons list
E1101 09:04:36.400839  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (30.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [c7811221-b07d-489a-b448-b331ef983ee8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003187717s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-224473 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-224473 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-224473 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-224473 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:04:31.141095  107955 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [08461983-91fa-48d7-aa91-8fd702007555] Pending
helpers_test.go:352: "sp-pod" [08461983-91fa-48d7-aa91-8fd702007555] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [08461983-91fa-48d7-aa91-8fd702007555] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004502287s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-224473 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-224473 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-224473 apply -f testdata/storage-provisioner/pod.yaml
I1101 09:04:45.940368  107955 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6d57d75d-8acf-48fe-a885-fe520fd8b584] Pending
helpers_test.go:352: "sp-pod" [6d57d75d-8acf-48fe-a885-fe520fd8b584] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6d57d75d-8acf-48fe-a885-fe520fd8b584] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00346007s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-224473 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (30.29s)
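
The round trip above is the point of the test: a file written through the first sp-pod survives that pod's deletion because it lives on the claim, not in the container. A condensed replay (paths and names from the log; a `kubectl wait` stands in for the test's pod-watch):

	// pvc_roundtrip.go - sketch of the write/delete/recreate/read sequence above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func kc(args ...string) error {
		cmd := exec.Command("kubectl",
			append([]string{"--context", "functional-224473"}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		steps := [][]string{
			{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through pod 1
			{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the claim
			{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // fresh pod, same claim
			{"wait", "--for=condition=Ready", "pod/sp-pod", "--timeout=120s"},
			{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // foo should still be there
		}
		for _, s := range steps {
			if err := kc(s...); err != nil {
				fmt.Println("step failed:", s, err)
				return
			}
		}
	}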

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh -n functional-224473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cp functional-224473:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3810109657/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh -n functional-224473 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh -n functional-224473 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.85s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (16.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-224473 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-snjxf" [ab82b124-35e6-4f0e-bad9-53b24458da49] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/01 09:04:56 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-snjxf" [ab82b124-35e6-4f0e-bad9-53b24458da49] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003778525s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-224473 exec mysql-5bb876957f-snjxf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-224473 exec mysql-5bb876957f-snjxf -- mysql -ppassword -e "show databases;": exit status 1 (89.413184ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 09:05:09.279452  107955 retry.go:31] will retry after 793.108333ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-224473 exec mysql-5bb876957f-snjxf -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-224473 exec mysql-5bb876957f-snjxf -- mysql -ppassword -e "show databases;": exit status 1 (110.688473ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1101 09:05:10.183563  107955 retry.go:31] will retry after 1.47793161s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-224473 exec mysql-5bb876957f-snjxf -- mysql -ppassword -e "show databases;"
E1101 09:05:58.322723  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:08:14.457607  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:08:42.165029  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:13:14.457564  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (16.72s)
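Editor's note on the two failed probes above: ERROR 2002 means the client reached the pod but mysqld had not yet created its socket, so the pod can be Running while the server is still initializing; the harness simply re-runs the probe with a growing wait (793ms, then ~1.5s) until it succeeds. A minimal Go sketch of that retry shape, with illustrative names only (retryCommand is not the harness's real API):

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retryCommand re-runs a command until it succeeds or attempts run out,
// sleeping a growing, jittered interval between tries.
func retryCommand(attempts int, name string, args ...string) error {
	var err error
	backoff := 500 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if _, err = exec.Command(name, args...).CombinedOutput(); err == nil {
			return nil // probe succeeded
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff))) // add jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		backoff *= 2 // grow the wait, as the increasing intervals above suggest
	}
	return err
}

func main() {
	// The probe the MySQL test keeps re-running until mysqld answers.
	err := retryCommand(5, "kubectl", "--context", "functional-224473",
		"exec", "mysql-5bb876957f-snjxf", "--",
		"mysql", "-ppassword", "-e", "show databases;")
	fmt.Println("final result:", err)
}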

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/107955/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /etc/test/nested/copy/107955/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.79s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/107955.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /etc/ssl/certs/107955.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/107955.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /usr/share/ca-certificates/107955.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1079552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /etc/ssl/certs/1079552.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1079552.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /usr/share/ca-certificates/1079552.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.79s)

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-224473 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
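Editor's note: the go-template above takes item 0 of the node list and ranges over its metadata.labels map, printing each key. A small self-contained sketch of the same template evaluated with Go's text/template (the sample JSON is made up; only the template string mirrors the test's):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Made-up stand-in for `kubectl get nodes -o json` output.
	sample := []byte(`{"items":[{"metadata":{"labels":{
		"kubernetes.io/hostname":"functional-224473",
		"kubernetes.io/os":"linux"}}}]}`)

	var nodeList map[string]interface{}
	if err := json.Unmarshal(sample, &nodeList); err != nil {
		panic(err)
	}

	// Same template string the test passes to kubectl --output=go-template.
	tmpl := template.Must(template.New("labels").Parse(
		`{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}`))
	if err := tmpl.Execute(os.Stdout, nodeList); err != nil {
		panic(err)
	}
}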

TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh "sudo systemctl is-active docker": exit status 1 (355.520651ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh "sudo systemctl is-active containerd": exit status 1 (314.566602ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.67s)
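Editor's note: the two non-zero exits above are the expected outcome. `systemctl is-active` exits 0 only when the unit is active, and the exit-status-3 / "inactive" pair is exactly what a crio cluster should report for docker and containerd. A sketch of interpreting that exit code from Go:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeActive reports whether a systemd unit is active. A nil error from
// Run/CombinedOutput means exit status 0, which is the only "active" case;
// exit status 3, as in the log above, is the usual "inactive" result.
func runtimeActive(unit string) (bool, string) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out)) // "active", "inactive", ...
	return err == nil, state
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		active, state := runtimeActive(unit)
		fmt.Printf("%s: state=%q active=%v\n", unit, state, active)
	}
}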

TestFunctional/parallel/License (1.07s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
functional_test.go:2293: (dbg) Done: out/minikube-linux-amd64 license: (1.065160882s)
--- PASS: TestFunctional/parallel/License (1.07s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224473 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224473 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-224473 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-224473 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 141059: os: process already finished
helpers_test.go:519: unable to terminate pid 140773: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-224473 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-224473 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [85294112-7715-4ae4-9142-c0e93d7d9292] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [85294112-7715-4ae4-9142-c0e93d7d9292] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003775497s
I1101 09:04:35.453248  107955 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.23s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "382.992885ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "72.5005ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "434.262596ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "76.074085ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (9.23s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdany-port4025207283/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761987869007217887" to /tmp/TestFunctionalparallelMountCmdany-port4025207283/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761987869007217887" to /tmp/TestFunctionalparallelMountCmdany-port4025207283/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761987869007217887" to /tmp/TestFunctionalparallelMountCmdany-port4025207283/001/test-1761987869007217887
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (340.309612ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1101 09:04:29.347936  107955 retry.go:31] will retry after 640.105726ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  1 09:04 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  1 09:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  1 09:04 test-1761987869007217887
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh cat /mount-9p/test-1761987869007217887
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-224473 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [96148f8d-53dd-4ff8-aa88-50bad4c8fff8] Pending
helpers_test.go:352: "busybox-mount" [96148f8d-53dd-4ff8-aa88-50bad4c8fff8] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [96148f8d-53dd-4ff8-aa88-50bad4c8fff8] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [96148f8d-53dd-4ff8-aa88-50bad4c8fff8] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003732202s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-224473 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdany-port4025207283/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.23s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-224473 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.106.252.107 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-224473 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.53s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.53s)

TestFunctional/parallel/MountCmd/specific-port (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdspecific-port4247987442/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (300.391928ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1101 09:04:38.537829  107955 retry.go:31] will retry after 275.895522ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdspecific-port4247987442/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh "sudo umount -f /mount-9p": exit status 1 (283.436598ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-224473 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdspecific-port4247987442/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.63s)
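Editor's note: the `umount -f` failure above (exit status 32, "not mounted") happens because stopping the mount daemon already removed the 9p mount, so the explicit cleanup finds nothing to do; util-linux uses exit code 32 for mount failures of this kind. A sketch of a cleanup helper that treats "not mounted" as success, assuming that interpretation:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceUnmount force-unmounts a path but accepts the "not mounted" outcome,
// mirroring what the test teardown above tolerates.
func forceUnmount(path string) error {
	out, err := exec.Command("sudo", "umount", "-f", path).CombinedOutput()
	if err != nil && strings.Contains(string(out), "not mounted") {
		return nil // nothing left to clean up
	}
	return err
}

func main() {
	fmt.Println(forceUnmount("/mount-9p"))
}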

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224473 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224473 image ls --format short --alsologtostderr:
I1101 09:04:57.578302  147820 out.go:360] Setting OutFile to fd 1 ...
I1101 09:04:57.578567  147820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:04:57.578571  147820 out.go:374] Setting ErrFile to fd 2...
I1101 09:04:57.578575  147820 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:04:57.578816  147820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
I1101 09:04:57.579689  147820 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:04:57.579860  147820 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:04:57.580434  147820 cli_runner.go:164] Run: docker container inspect functional-224473 --format={{.State.Status}}
I1101 09:04:57.604833  147820 ssh_runner.go:195] Run: systemctl --version
I1101 09:04:57.604898  147820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224473
I1101 09:04:57.629408  147820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/functional-224473/id_rsa Username:docker}
I1101 09:04:57.744935  147820 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224473 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/my-image                      │ functional-224473  │ 27ab6d535819e │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/library/nginx                 │ latest             │ 9d0e6f6199dcb │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224473 image ls --format table --alsologtostderr:
I1101 09:05:04.510323  148809 out.go:360] Setting OutFile to fd 1 ...
I1101 09:05:04.510619  148809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:05:04.510630  148809 out.go:374] Setting ErrFile to fd 2...
I1101 09:05:04.510634  148809 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:05:04.510808  148809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
I1101 09:05:04.511443  148809 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:05:04.511542  148809 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:05:04.511946  148809 cli_runner.go:164] Run: docker container inspect functional-224473 --format={{.State.Status}}
I1101 09:05:04.531643  148809 ssh_runner.go:195] Run: systemctl --version
I1101 09:05:04.531712  148809 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224473
I1101 09:05:04.550162  148809 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/functional-224473/id_rsa Username:docker}
I1101 09:05:04.652098  148809 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224473 image ls --format json --alsologtostderr:
[
{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},
{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},
{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},
{"id":"27ab6d535819e0846f82ec087f2cb964b5f09e02dd342e29a8d44494715d2093","repoDigests":["localhost/my-image@sha256:3b72a2f70080904f5800f1c97e15b89879da18592e08beb651037cfc35dbc39e"],"repoTags":["localhost/my-image:functional-224473"],"size":"1468744"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},
{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},
{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},
{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},
{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},
{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},
{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},
{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},
{"id":"9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec","repoDigests":["docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},
{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},
{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},
{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},
{"id":"ec60ffb77c6c061f2ffb10473480c175822d9e572ab66108531c209aacc11403","repoDigests":["docker.io/library/f907386d7ae95f08d61834646a033ba5ff19361cb2770668f6acd710e8932edd-tmp@sha256:7a7a3cf4162db6c0777844bd45ff51b797f3b8fd03b5176e0c22a6ae4fcdf4cc"],"repoTags":[],"size":"1466132"},
{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},
{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},
{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"}
]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224473 image ls --format json --alsologtostderr:
I1101 09:05:04.271264  148756 out.go:360] Setting OutFile to fd 1 ...
I1101 09:05:04.271601  148756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:05:04.271612  148756 out.go:374] Setting ErrFile to fd 2...
I1101 09:05:04.271623  148756 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:05:04.271819  148756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
I1101 09:05:04.272529  148756 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:05:04.272645  148756 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:05:04.273040  148756 cli_runner.go:164] Run: docker container inspect functional-224473 --format={{.State.Status}}
I1101 09:05:04.292967  148756 ssh_runner.go:195] Run: systemctl --version
I1101 09:05:04.293023  148756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224473
I1101 09:05:04.312508  148756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/functional-224473/id_rsa Username:docker}
I1101 09:05:04.414039  148756 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
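Editor's note: the JSON above is an array of image records with id, repoDigests, repoTags, and a size encoded as a string of bytes. A sketch of Go structs that would decode it (field names are inferred from the output itself, not taken from minikube's declared types):

package main

import (
	"encoding/json"
	"fmt"
)

// imageRecord mirrors one element of the `image ls --format json` array.
type imageRecord struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a decimal string
}

func main() {
	// One entry copied from the output above, as test input.
	data := []byte(`[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da",
		"repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],
		"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]`)

	var images []imageRecord
	if err := json.Unmarshal(data, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Printf("%.13s  tags=%v  size=%s\n", img.ID, img.RepoTags, img.Size)
	}
}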

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224473 image ls --format yaml --alsologtostderr:
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 9d0e6f6199dcb6e045dad103064601d730fcfaf8d1bd357d969fb70bd5b90dec
repoDigests:
- docker.io/library/nginx@sha256:12549785f32b3daca6f1c39e7d756226eeb0e8bb20b9e2d8a03d484160862b58
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224473 image ls --format yaml --alsologtostderr:
I1101 09:04:57.884349  147885 out.go:360] Setting OutFile to fd 1 ...
I1101 09:04:57.884792  147885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:04:57.884807  147885 out.go:374] Setting ErrFile to fd 2...
I1101 09:04:57.884814  147885 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:04:57.885171  147885 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
I1101 09:04:57.885993  147885 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:04:57.886155  147885 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:04:57.886733  147885 cli_runner.go:164] Run: docker container inspect functional-224473 --format={{.State.Status}}
I1101 09:04:57.914727  147885 ssh_runner.go:195] Run: systemctl --version
I1101 09:04:57.914816  147885 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224473
I1101 09:04:57.941021  147885 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/functional-224473/id_rsa Username:docker}
I1101 09:04:58.059924  147885 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (6.1s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh pgrep buildkitd: exit status 1 (352.818211ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image build -t localhost/my-image:functional-224473 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 image build -t localhost/my-image:functional-224473 testdata/build --alsologtostderr: (5.503221977s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-224473 image build -t localhost/my-image:functional-224473 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ec60ffb77c6
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-224473
--> 27ab6d53581
Successfully tagged localhost/my-image:functional-224473
27ab6d535819e0846f82ec087f2cb964b5f09e02dd342e29a8d44494715d2093
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-224473 image build -t localhost/my-image:functional-224473 testdata/build --alsologtostderr:
I1101 09:04:58.544767  148053 out.go:360] Setting OutFile to fd 1 ...
I1101 09:04:58.545136  148053 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:04:58.545150  148053 out.go:374] Setting ErrFile to fd 2...
I1101 09:04:58.545157  148053 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1101 09:04:58.545641  148053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
I1101 09:04:58.546521  148053 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:04:58.547317  148053 config.go:182] Loaded profile config "functional-224473": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1101 09:04:58.547803  148053 cli_runner.go:164] Run: docker container inspect functional-224473 --format={{.State.Status}}
I1101 09:04:58.572837  148053 ssh_runner.go:195] Run: systemctl --version
I1101 09:04:58.572933  148053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-224473
I1101 09:04:58.597313  148053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/functional-224473/id_rsa Username:docker}
I1101 09:04:58.712210  148053 build_images.go:162] Building image from path: /tmp/build.557434294.tar
I1101 09:04:58.712305  148053 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1101 09:04:58.724083  148053 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.557434294.tar
I1101 09:04:58.729908  148053 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.557434294.tar: stat -c "%s %y" /var/lib/minikube/build/build.557434294.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.557434294.tar': No such file or directory
I1101 09:04:58.729959  148053 ssh_runner.go:362] scp /tmp/build.557434294.tar --> /var/lib/minikube/build/build.557434294.tar (3072 bytes)
I1101 09:04:58.759055  148053 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.557434294
I1101 09:04:58.771279  148053 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.557434294 -xf /var/lib/minikube/build/build.557434294.tar
I1101 09:04:58.783976  148053 crio.go:315] Building image: /var/lib/minikube/build/build.557434294
I1101 09:04:58.784053  148053 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-224473 /var/lib/minikube/build/build.557434294 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1101 09:05:03.942659  148053 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-224473 /var/lib/minikube/build/build.557434294 --cgroup-manager=cgroupfs: (5.158575552s)
I1101 09:05:03.942757  148053 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.557434294
I1101 09:05:03.952047  148053 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.557434294.tar
I1101 09:05:03.961157  148053 build_images.go:218] Built localhost/my-image:functional-224473 from /tmp/build.557434294.tar
I1101 09:05:03.961210  148053 build_images.go:134] succeeded building to: functional-224473
I1101 09:05:03.961217  148053 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.10s)
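Editor's note: the stderr above spells out how `image build` works against a crio node: pack the build context into a tar, copy it onto the node, unpack it under /var/lib/minikube/build, and run `sudo podman build` there (podman does the building; cri-o itself has no build support). A rough sketch of the same sequence driven through minikube's CLI; the paths and the run helper are simplified stand-ins, not the harness's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its output, and aborts on failure.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	const remoteDir = "/var/lib/minikube/build/build.example"
	// 1. pack the build context locally (Dockerfile plus content)
	run("tar", "-cf", "/tmp/build.example.tar", "-C", "testdata/build", ".")
	// 2. copy it onto the node and unpack it
	run("minikube", "-p", "functional-224473", "cp",
		"/tmp/build.example.tar", "/tmp/build.example.tar")
	run("minikube", "-p", "functional-224473", "ssh", "--",
		"sudo", "mkdir", "-p", remoteDir)
	run("minikube", "-p", "functional-224473", "ssh", "--",
		"sudo", "tar", "-C", remoteDir, "-xf", "/tmp/build.example.tar")
	// 3. build with podman on the node, as the logged ssh_runner step does
	run("minikube", "-p", "functional-224473", "ssh", "--",
		"sudo", "podman", "build",
		"-t", "localhost/my-image:functional-224473", remoteDir)
}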

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.744535041s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-224473
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2863104881/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2863104881/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2863104881/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T" /mount1: exit status 1 (375.084406ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1101 09:04:40.245725  107955 retry.go:31] will retry after 307.235084ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-224473 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2863104881/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2863104881/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-224473 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2863104881/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
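The first findmnt probe above loses a race with the three mount daemons and is retried after ~307ms (the retry.go line in the log). A minimal Go sketch of that retry-with-jittered-backoff pattern, under the assumption of a simple fixed-base policy; minikube's actual retry.go helper differs in signature and backoff details:

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn until it succeeds or attempts run out, sleeping a
// jittered delay between tries. Hypothetical helper, not minikube's own.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base)/4)) // e.g. ~307ms for a 300ms base
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	_ = retry(3, 300*time.Millisecond, func() error {
		calls++
		if calls < 2 {
			return fmt.Errorf("exit status 1") // first probe fails, second succeeds
		}
		return nil
	})
}
```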

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image rm kicbase/echo-server:functional-224473 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ServiceCmd/List (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 service list: (1.713611627s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-224473 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-224473 service list -o json: (1.712675284s)
functional_test.go:1504: Took "1.712782085s" to run "out/minikube-linux-amd64 -p functional-224473 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-224473
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-224473
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-224473
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (110.2s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m49.420558637s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (110.20s)

TestMultiControlPlane/serial/DeployApp (6.56s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 kubectl -- rollout status deployment/busybox: (4.610900412s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-477sc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-c5ffq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-n79m9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-477sc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-c5ffq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-n79m9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-477sc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-c5ffq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-n79m9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.56s)

TestMultiControlPlane/serial/PingHostFromPods (1.09s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-477sc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-477sc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-c5ffq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-c5ffq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-n79m9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 kubectl -- exec busybox-7b57f96db7-n79m9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
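The nslookup | awk 'NR==5' | cut -d' ' -f3 pipeline above pulls the host IP out of BusyBox nslookup output before pinging it. The same extraction expressed in Go, over an assumed sample transcript (real resolver output layouts vary, which is why the test hard-codes line 5, field 3):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed BusyBox nslookup transcript; the host IP sits on line 5, field 3.
	sample := `Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal
`
	lines := strings.Split(sample, "\n")
	fields := strings.Split(lines[4], " ") // awk 'NR==5' -> index 4
	fmt.Println(fields[2])                 // cut -f3 -> index 2, prints 192.168.49.1
}
```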

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (25.21s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 node add --alsologtostderr -v 5: (24.270102008s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.21s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-324242 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.93s)

TestMultiControlPlane/serial/CopyFile (17.89s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp testdata/cp-test.txt ha-324242:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile147302909/001/cp-test_ha-324242.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242:/home/docker/cp-test.txt ha-324242-m02:/home/docker/cp-test_ha-324242_ha-324242-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test_ha-324242_ha-324242-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242:/home/docker/cp-test.txt ha-324242-m03:/home/docker/cp-test_ha-324242_ha-324242-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test_ha-324242_ha-324242-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242:/home/docker/cp-test.txt ha-324242-m04:/home/docker/cp-test_ha-324242_ha-324242-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test_ha-324242_ha-324242-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp testdata/cp-test.txt ha-324242-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile147302909/001/cp-test_ha-324242-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m02:/home/docker/cp-test.txt ha-324242:/home/docker/cp-test_ha-324242-m02_ha-324242.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test_ha-324242-m02_ha-324242.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m02:/home/docker/cp-test.txt ha-324242-m03:/home/docker/cp-test_ha-324242-m02_ha-324242-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test_ha-324242-m02_ha-324242-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m02:/home/docker/cp-test.txt ha-324242-m04:/home/docker/cp-test_ha-324242-m02_ha-324242-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test_ha-324242-m02_ha-324242-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp testdata/cp-test.txt ha-324242-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile147302909/001/cp-test_ha-324242-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m03:/home/docker/cp-test.txt ha-324242:/home/docker/cp-test_ha-324242-m03_ha-324242.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test_ha-324242-m03_ha-324242.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m03:/home/docker/cp-test.txt ha-324242-m02:/home/docker/cp-test_ha-324242-m03_ha-324242-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test_ha-324242-m03_ha-324242-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m03:/home/docker/cp-test.txt ha-324242-m04:/home/docker/cp-test_ha-324242-m03_ha-324242-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test_ha-324242-m03_ha-324242-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp testdata/cp-test.txt ha-324242-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile147302909/001/cp-test_ha-324242-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m04:/home/docker/cp-test.txt ha-324242:/home/docker/cp-test_ha-324242-m04_ha-324242.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242 "sudo cat /home/docker/cp-test_ha-324242-m04_ha-324242.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m04:/home/docker/cp-test.txt ha-324242-m02:/home/docker/cp-test_ha-324242-m04_ha-324242-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m02 "sudo cat /home/docker/cp-test_ha-324242-m04_ha-324242-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 cp ha-324242-m04:/home/docker/cp-test.txt ha-324242-m03:/home/docker/cp-test_ha-324242-m04_ha-324242-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 ssh -n ha-324242-m03 "sudo cat /home/docker/cp-test_ha-324242-m04_ha-324242-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.89s)
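Each row in the matrix above is a copy followed by an SSH read-back on the target node. One round trip expressed with os/exec, reusing the profile, node, and paths from the log (a sketch of the shape of the helpers, not the harness's actual code):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Copy a local fixture onto node m02 of the ha-324242 profile.
	cp := exec.Command("out/minikube-linux-amd64", "-p", "ha-324242",
		"cp", "testdata/cp-test.txt", "ha-324242-m02:/home/docker/cp-test.txt")
	if err := cp.Run(); err != nil {
		panic(err)
	}
	// Read the file back over SSH to verify the copy landed intact.
	cat := exec.Command("out/minikube-linux-amd64", "-p", "ha-324242",
		"ssh", "-n", "ha-324242-m02", "sudo cat /home/docker/cp-test.txt")
	out, err := cat.Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
```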

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (19.84s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 node stop m02 --alsologtostderr -v 5: (19.102209422s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5: exit status 7 (736.281602ms)
-- stdout --
	ha-324242
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-324242-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-324242-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-324242-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1101 09:17:48.962842  172879 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:17:48.963149  172879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:17:48.963160  172879 out.go:374] Setting ErrFile to fd 2...
	I1101 09:17:48.963164  172879 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:17:48.963389  172879 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:17:48.963581  172879 out.go:368] Setting JSON to false
	I1101 09:17:48.963606  172879 mustload.go:66] Loading cluster: ha-324242
	I1101 09:17:48.963787  172879 notify.go:221] Checking for updates...
	I1101 09:17:48.964021  172879 config.go:182] Loaded profile config "ha-324242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:17:48.964041  172879 status.go:174] checking status of ha-324242 ...
	I1101 09:17:48.964567  172879 cli_runner.go:164] Run: docker container inspect ha-324242 --format={{.State.Status}}
	I1101 09:17:48.984625  172879 status.go:371] ha-324242 host status = "Running" (err=<nil>)
	I1101 09:17:48.984678  172879 host.go:66] Checking if "ha-324242" exists ...
	I1101 09:17:48.985002  172879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-324242
	I1101 09:17:49.004415  172879 host.go:66] Checking if "ha-324242" exists ...
	I1101 09:17:49.004740  172879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:17:49.004795  172879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-324242
	I1101 09:17:49.024211  172879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/ha-324242/id_rsa Username:docker}
	I1101 09:17:49.124216  172879 ssh_runner.go:195] Run: systemctl --version
	I1101 09:17:49.130818  172879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:17:49.145187  172879 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:17:49.208402  172879 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-01 09:17:49.199014245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:17:49.208888  172879 kubeconfig.go:125] found "ha-324242" server: "https://192.168.49.254:8443"
	I1101 09:17:49.208937  172879 api_server.go:166] Checking apiserver status ...
	I1101 09:17:49.208978  172879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:17:49.220625  172879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup
	W1101 09:17:49.229544  172879 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1233/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:17:49.229594  172879 ssh_runner.go:195] Run: ls
	I1101 09:17:49.233620  172879 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 09:17:49.237824  172879 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 09:17:49.237855  172879 status.go:463] ha-324242 apiserver status = Running (err=<nil>)
	I1101 09:17:49.237868  172879 status.go:176] ha-324242 status: &{Name:ha-324242 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:17:49.237979  172879 status.go:174] checking status of ha-324242-m02 ...
	I1101 09:17:49.238285  172879 cli_runner.go:164] Run: docker container inspect ha-324242-m02 --format={{.State.Status}}
	I1101 09:17:49.257059  172879 status.go:371] ha-324242-m02 host status = "Stopped" (err=<nil>)
	I1101 09:17:49.257086  172879 status.go:384] host is not running, skipping remaining checks
	I1101 09:17:49.257094  172879 status.go:176] ha-324242-m02 status: &{Name:ha-324242-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:17:49.257120  172879 status.go:174] checking status of ha-324242-m03 ...
	I1101 09:17:49.257394  172879 cli_runner.go:164] Run: docker container inspect ha-324242-m03 --format={{.State.Status}}
	I1101 09:17:49.278084  172879 status.go:371] ha-324242-m03 host status = "Running" (err=<nil>)
	I1101 09:17:49.278121  172879 host.go:66] Checking if "ha-324242-m03" exists ...
	I1101 09:17:49.278481  172879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-324242-m03
	I1101 09:17:49.299471  172879 host.go:66] Checking if "ha-324242-m03" exists ...
	I1101 09:17:49.299836  172879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:17:49.299889  172879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-324242-m03
	I1101 09:17:49.318493  172879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/ha-324242-m03/id_rsa Username:docker}
	I1101 09:17:49.417249  172879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:17:49.430710  172879 kubeconfig.go:125] found "ha-324242" server: "https://192.168.49.254:8443"
	I1101 09:17:49.430745  172879 api_server.go:166] Checking apiserver status ...
	I1101 09:17:49.430784  172879 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:17:49.441996  172879 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W1101 09:17:49.450903  172879 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:17:49.450988  172879 ssh_runner.go:195] Run: ls
	I1101 09:17:49.454719  172879 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1101 09:17:49.459525  172879 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1101 09:17:49.459560  172879 status.go:463] ha-324242-m03 apiserver status = Running (err=<nil>)
	I1101 09:17:49.459574  172879 status.go:176] ha-324242-m03 status: &{Name:ha-324242-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:17:49.459599  172879 status.go:174] checking status of ha-324242-m04 ...
	I1101 09:17:49.459933  172879 cli_runner.go:164] Run: docker container inspect ha-324242-m04 --format={{.State.Status}}
	I1101 09:17:49.478899  172879 status.go:371] ha-324242-m04 host status = "Running" (err=<nil>)
	I1101 09:17:49.478952  172879 host.go:66] Checking if "ha-324242-m04" exists ...
	I1101 09:17:49.479199  172879 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-324242-m04
	I1101 09:17:49.497837  172879 host.go:66] Checking if "ha-324242-m04" exists ...
	I1101 09:17:49.498231  172879 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:17:49.498285  172879 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-324242-m04
	I1101 09:17:49.516779  172879 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/ha-324242-m04/id_rsa Username:docker}
	I1101 09:17:49.615373  172879 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:17:49.628157  172879 status.go:176] ha-324242-m04 status: &{Name:ha-324242-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.84s)
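The &{Name:... Host:...} dumps at status.go:176 above show the per-node status value the command aggregates. A reconstructed stand-in for that value (field set inferred from the dumps; minikube's real struct may carry more):

```go
package main

import "fmt"

// Status mirrors the fields visible in the status.go:176 dumps above.
// Reconstructed for illustration only, not minikube's actual type.
type Status struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}

func main() {
	s := &Status{Name: "ha-324242-m02", Host: "Stopped", Kubelet: "Stopped",
		APIServer: "Stopped", Kubeconfig: "Stopped"}
	fmt.Printf("%+v\n", s) // prints &{Name:ha-324242-m02 Host:Stopped ...}, as in the log
}
```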

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.76s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.4s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 node start m02 --alsologtostderr -v 5: (13.382225887s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.76s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 stop --alsologtostderr -v 5
E1101 09:18:14.461729  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 stop --alsologtostderr -v 5: (39.919643897s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 start --wait true --alsologtostderr -v 5
E1101 09:19:24.741995  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:24.748398  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:24.759805  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:24.781306  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:24.822765  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:24.904296  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:25.065823  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:25.387752  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:26.029853  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:27.311481  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:29.872889  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:34.995188  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:37.527127  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:19:45.237043  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 start --wait true --alsologtostderr -v 5: (59.699654119s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.76s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 node delete m03 --alsologtostderr -v 5: (9.790834405s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)
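The go-template passed to kubectl above walks every node's conditions and prints the Ready status, one per line. kubectl evaluates such templates over unstructured (map-shaped) JSON, so the same string runs unchanged under Go's text/template when fed maps; the two-node sample below is fabricated:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Template string from the kubectl invocation above, outer quotes dropped.
	const src = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	t := template.Must(template.New("ready").Parse(src))

	// Fabricated stand-in for the decoded NodeList: two nodes, both Ready.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	if err := t.Execute(os.Stdout, nodes); err != nil { // prints " True" once per node
		panic(err)
	}
}
```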

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (37.29s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 stop --alsologtostderr -v 5
E1101 09:20:05.718741  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 stop --alsologtostderr -v 5: (37.163026755s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5: exit status 7 (123.623787ms)
-- stdout --
	ha-324242
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-324242-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-324242-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1101 09:20:34.146147  187504 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:20:34.146426  187504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:34.146438  187504 out.go:374] Setting ErrFile to fd 2...
	I1101 09:20:34.146453  187504 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:20:34.146667  187504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:20:34.146868  187504 out.go:368] Setting JSON to false
	I1101 09:20:34.146894  187504 mustload.go:66] Loading cluster: ha-324242
	I1101 09:20:34.147059  187504 notify.go:221] Checking for updates...
	I1101 09:20:34.147333  187504 config.go:182] Loaded profile config "ha-324242": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:20:34.147351  187504 status.go:174] checking status of ha-324242 ...
	I1101 09:20:34.147810  187504 cli_runner.go:164] Run: docker container inspect ha-324242 --format={{.State.Status}}
	I1101 09:20:34.167768  187504 status.go:371] ha-324242 host status = "Stopped" (err=<nil>)
	I1101 09:20:34.167811  187504 status.go:384] host is not running, skipping remaining checks
	I1101 09:20:34.167821  187504 status.go:176] ha-324242 status: &{Name:ha-324242 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:20:34.167864  187504 status.go:174] checking status of ha-324242-m02 ...
	I1101 09:20:34.168191  187504 cli_runner.go:164] Run: docker container inspect ha-324242-m02 --format={{.State.Status}}
	I1101 09:20:34.186868  187504 status.go:371] ha-324242-m02 host status = "Stopped" (err=<nil>)
	I1101 09:20:34.186893  187504 status.go:384] host is not running, skipping remaining checks
	I1101 09:20:34.186901  187504 status.go:176] ha-324242-m02 status: &{Name:ha-324242-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:20:34.186941  187504 status.go:174] checking status of ha-324242-m04 ...
	I1101 09:20:34.187209  187504 cli_runner.go:164] Run: docker container inspect ha-324242-m04 --format={{.State.Status}}
	I1101 09:20:34.206877  187504 status.go:371] ha-324242-m04 host status = "Stopped" (err=<nil>)
	I1101 09:20:34.206945  187504 status.go:384] host is not running, skipping remaining checks
	I1101 09:20:34.206954  187504 status.go:176] ha-324242-m04 status: &{Name:ha-324242-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.29s)

TestMultiControlPlane/serial/RestartCluster (51.4s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1101 09:20:46.680499  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.53894451s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (51.40s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (41.47s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-324242 node add --control-plane --alsologtostderr -v 5: (40.527149141s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-324242 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1101 09:22:08.602096  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

TestJSONOutput/start/Command (37.02s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-710213 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-710213 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.023622145s)
--- PASS: TestJSONOutput/start/Command (37.02s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.14s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-710213 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-710213 --output=json --user=testUser: (6.141184244s)
--- PASS: TestJSONOutput/stop/Command (6.14s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.25s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-388593 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-388593 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (83.400686ms)
-- stdout --
	{"specversion":"1.0","id":"9ceeba81-31f6-41c9-b051-9f45f8cf026f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-388593] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4052d08a-e510-4231-980a-1b19a59b2695","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21833"}}
	{"specversion":"1.0","id":"ac70be50-cd3a-4d7d-8cf3-fd10cff6cea8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"99b2d734-de7a-4e54-bc80-6479388a5c0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig"}}
	{"specversion":"1.0","id":"cbf98b45-7d3b-42ec-9a7a-0fe25e80cfa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube"}}
	{"specversion":"1.0","id":"3cb226e7-9093-4427-92e7-f29e70a05680","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"39d19ac8-4e6e-48c3-9ca7-4243fffc8cc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"03e6024c-1c88-4bab-94b4-d48ce4c3bfc0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-388593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-388593
--- PASS: TestErrorJSONOutput (0.25s)
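Since --output=json emits one CloudEvents-style JSON object per line (as the dump above shows), the stream can be post-processed with standard tools. A minimal sketch, assuming jq is available on the host (jq is not part of the test; the profile name "demo" is a placeholder):
    out/minikube-linux-amd64 start -p demo --output=json | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'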

TestKicCustomNetwork/create_custom_network (35s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-659002 --network=
E1101 09:23:14.460510  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-659002 --network=: (32.754056969s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-659002" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-659002
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-659002: (2.226686081s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.00s)

TestKicCustomNetwork/use_default_bridge_network (25.7s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-204404 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-204404 --network=bridge: (23.651767011s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-204404" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-204404
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-204404: (2.021942404s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.70s)
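The two --network variants exercised by these subtests can be reproduced by hand; "demo" is a placeholder profile, and the two start commands are alternatives, not a sequence:
    out/minikube-linux-amd64 start -p demo --network=          # let minikube create and manage its own network
    out/minikube-linux-amd64 start -p demo --network=bridge    # reuse Docker's default bridge instead
    docker network ls --format {{.Name}}                       # confirm which networks exist afterwards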

TestKicExistingNetwork (27.74s)
=== RUN   TestKicExistingNetwork
I1101 09:24:10.010972  107955 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1101 09:24:10.028731  107955 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1101 09:24:10.028807  107955 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1101 09:24:10.028829  107955 cli_runner.go:164] Run: docker network inspect existing-network
W1101 09:24:10.046345  107955 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1101 09:24:10.046376  107955 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1101 09:24:10.046394  107955 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1101 09:24:10.046539  107955 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1101 09:24:10.065315  107955 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-7d29bf8504a2 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:c2:cd:69:fb:c0:b7} reservation:<nil>}
I1101 09:24:10.065782  107955 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018a9be0}
I1101 09:24:10.065820  107955 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1101 09:24:10.065874  107955 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1101 09:24:10.129037  107955 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-063974 --network=existing-network
E1101 09:24:24.741959  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-063974 --network=existing-network: (25.556634494s)
helpers_test.go:175: Cleaning up "existing-network-063974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-063974
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-063974: (2.032679451s)
I1101 09:24:37.737371  107955 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (27.74s)
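The test pre-creates the network with plain docker before pointing minikube at it. A hand-run equivalent, assuming the 192.168.58.0/24 range is free on the host ("demo" is a placeholder; the extra bridge options the test passes are omitted here):
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p demo --network=existing-network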

TestKicCustomSubnet (25.94s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-659224 --subnet=192.168.60.0/24
E1101 09:24:52.444527  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-659224 --subnet=192.168.60.0/24: (23.712164437s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-659224 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-659224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-659224
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-659224: (2.207148886s)
--- PASS: TestKicCustomSubnet (25.94s)
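The same subnet check works outside the test. As the inspect call above shows, the kic network is named after the profile, so with a placeholder profile "demo":
    out/minikube-linux-amd64 start -p demo --subnet=192.168.60.0/24
    docker network inspect demo --format "{{(index .IPAM.Config 0).Subnet}}"   # should print 192.168.60.0/24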

TestKicStaticIP (27.79s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-400113 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-400113 --static-ip=192.168.200.200: (25.452726823s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-400113 ip
helpers_test.go:175: Cleaning up "static-ip-400113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-400113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-400113: (2.178602457s)
--- PASS: TestKicStaticIP (27.79s)
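The static-IP flow is reproducible with two commands ("demo" is a placeholder; the test picks an address from a private range):
    out/minikube-linux-amd64 start -p demo --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p demo ip    # should print 192.168.200.200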

TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.59s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-049803 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-049803 --driver=docker  --container-runtime=crio: (21.716967598s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-052054 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-052054 --driver=docker  --container-runtime=crio: (21.702202618s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-049803
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-052054
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-052054" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-052054
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-052054: (2.410703902s)
helpers_test.go:175: Cleaning up "first-049803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-049803
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-049803: (2.443697767s)
--- PASS: TestMinikubeProfile (49.59s)
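The profile subcommands used above switch the active profile and dump all profiles as JSON; the name below is a placeholder:
    out/minikube-linux-amd64 profile demo-a          # make demo-a the active profile
    out/minikube-linux-amd64 profile list -ojson     # inspect all profiles programmatically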

TestMountStart/serial/StartWithMountFirst (6.56s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-101705 --memory=3072 --mount-string /tmp/TestMountStartserial959745934/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-101705 --memory=3072 --mount-string /tmp/TestMountStartserial959745934/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.557801206s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.56s)
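Trimmed to its essential flags, the mount-at-start invocation above looks like this (host path and profile "demo" are placeholders; --no-kubernetes keeps the node lightweight since only the mount is under test):
    out/minikube-linux-amd64 start -p demo --no-kubernetes --driver=docker --container-runtime=crio --mount-string /tmp/hostdir:/minikube-host --mount-port 46464
    out/minikube-linux-amd64 -p demo ssh -- ls /minikube-host    # the VerifyMount* subtests below do exactly this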

TestMountStart/serial/VerifyMountFirst (0.28s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-101705 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (8.78s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-115967 --memory=3072 --mount-string /tmp/TestMountStartserial959745934/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-115967 --memory=3072 --mount-string /tmp/TestMountStartserial959745934/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.782308157s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.78s)

TestMountStart/serial/VerifyMountSecond (0.28s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.73s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-101705 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-101705 --alsologtostderr -v=5: (1.725330795s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.26s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-115967
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-115967: (1.260866826s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.85s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-115967
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-115967: (6.851206111s)
--- PASS: TestMountStart/serial/RestartStopped (7.85s)

TestMountStart/serial/VerifyMountPostStop (0.28s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-115967 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (66.28s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-509402 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-509402 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.760071082s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.28s)
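A two-node cluster like the one above can be brought up and grown by hand ("demo" is a placeholder profile):
    out/minikube-linux-amd64 start -p demo --nodes=2 --memory=3072 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 node add -p demo      # the AddNode subtest below grows the cluster the same way
    out/minikube-linux-amd64 -p demo status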

TestMultiNode/serial/DeployApp2Nodes (4.68s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-509402 -- rollout status deployment/busybox: (3.245416281s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-87594 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-lrqjk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-87594 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-lrqjk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-87594 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-lrqjk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.74s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-87594 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-87594 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-lrqjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-509402 -- exec busybox-7b57f96db7-lrqjk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.74s)

TestMultiNode/serial/AddNode (23.93s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-509402 -v=5 --alsologtostderr
E1101 09:28:14.457511  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-509402 -v=5 --alsologtostderr: (23.245673491s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.93s)

TestMultiNode/serial/MultiNodeLabels (0.06s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-509402 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.68s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.68s)

TestMultiNode/serial/CopyFile (10.13s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp testdata/cp-test.txt multinode-509402:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3594699278/001/cp-test_multinode-509402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402:/home/docker/cp-test.txt multinode-509402-m02:/home/docker/cp-test_multinode-509402_multinode-509402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m02 "sudo cat /home/docker/cp-test_multinode-509402_multinode-509402-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402:/home/docker/cp-test.txt multinode-509402-m03:/home/docker/cp-test_multinode-509402_multinode-509402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m03 "sudo cat /home/docker/cp-test_multinode-509402_multinode-509402-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp testdata/cp-test.txt multinode-509402-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3594699278/001/cp-test_multinode-509402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402-m02:/home/docker/cp-test.txt multinode-509402:/home/docker/cp-test_multinode-509402-m02_multinode-509402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402 "sudo cat /home/docker/cp-test_multinode-509402-m02_multinode-509402.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402-m02:/home/docker/cp-test.txt multinode-509402-m03:/home/docker/cp-test_multinode-509402-m02_multinode-509402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m03 "sudo cat /home/docker/cp-test_multinode-509402-m02_multinode-509402-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp testdata/cp-test.txt multinode-509402-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3594699278/001/cp-test_multinode-509402-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402-m03:/home/docker/cp-test.txt multinode-509402:/home/docker/cp-test_multinode-509402-m03_multinode-509402.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402 "sudo cat /home/docker/cp-test_multinode-509402-m03_multinode-509402.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 cp multinode-509402-m03:/home/docker/cp-test.txt multinode-509402-m02:/home/docker/cp-test_multinode-509402-m03_multinode-509402-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 ssh -n multinode-509402-m02 "sudo cat /home/docker/cp-test_multinode-509402-m03_multinode-509402-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.13s)
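The cp subcommand exercised above copies in all three directions; the profile "demo", node suffixes, and paths below are placeholders:
    out/minikube-linux-amd64 -p demo cp ./local.txt demo:/home/docker/remote.txt                            # host -> node
    out/minikube-linux-amd64 -p demo cp demo:/home/docker/remote.txt /tmp/back.txt                          # node -> host
    out/minikube-linux-amd64 -p demo cp demo-m02:/home/docker/remote.txt demo-m03:/home/docker/remote.txt   # node -> node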

TestMultiNode/serial/StopNode (2.33s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-509402 node stop m03: (1.276944813s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-509402 status: exit status 7 (526.59676ms)

-- stdout --
	multinode-509402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-509402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-509402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr: exit status 7 (525.651907ms)

-- stdout --
	multinode-509402
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-509402-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-509402-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 09:28:38.729436  246946 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:28:38.729738  246946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:38.729750  246946 out.go:374] Setting ErrFile to fd 2...
	I1101 09:28:38.729757  246946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:28:38.729986  246946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:28:38.730207  246946 out.go:368] Setting JSON to false
	I1101 09:28:38.730240  246946 mustload.go:66] Loading cluster: multinode-509402
	I1101 09:28:38.730394  246946 notify.go:221] Checking for updates...
	I1101 09:28:38.730751  246946 config.go:182] Loaded profile config "multinode-509402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:28:38.730776  246946 status.go:174] checking status of multinode-509402 ...
	I1101 09:28:38.731284  246946 cli_runner.go:164] Run: docker container inspect multinode-509402 --format={{.State.Status}}
	I1101 09:28:38.751884  246946 status.go:371] multinode-509402 host status = "Running" (err=<nil>)
	I1101 09:28:38.751929  246946 host.go:66] Checking if "multinode-509402" exists ...
	I1101 09:28:38.752190  246946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-509402
	I1101 09:28:38.770361  246946 host.go:66] Checking if "multinode-509402" exists ...
	I1101 09:28:38.770601  246946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:28:38.770637  246946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-509402
	I1101 09:28:38.789754  246946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/multinode-509402/id_rsa Username:docker}
	I1101 09:28:38.889740  246946 ssh_runner.go:195] Run: systemctl --version
	I1101 09:28:38.896372  246946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:38.909587  246946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:28:38.971000  246946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-01 09:28:38.959967005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:28:38.971485  246946 kubeconfig.go:125] found "multinode-509402" server: "https://192.168.67.2:8443"
	I1101 09:28:38.971514  246946 api_server.go:166] Checking apiserver status ...
	I1101 09:28:38.971556  246946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1101 09:28:38.983284  246946 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup
	W1101 09:28:38.991827  246946 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1248/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1101 09:28:38.991874  246946 ssh_runner.go:195] Run: ls
	I1101 09:28:38.995548  246946 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1101 09:28:38.999619  246946 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1101 09:28:38.999647  246946 status.go:463] multinode-509402 apiserver status = Running (err=<nil>)
	I1101 09:28:38.999660  246946 status.go:176] multinode-509402 status: &{Name:multinode-509402 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:28:38.999681  246946 status.go:174] checking status of multinode-509402-m02 ...
	I1101 09:28:38.999968  246946 cli_runner.go:164] Run: docker container inspect multinode-509402-m02 --format={{.State.Status}}
	I1101 09:28:39.018463  246946 status.go:371] multinode-509402-m02 host status = "Running" (err=<nil>)
	I1101 09:28:39.018504  246946 host.go:66] Checking if "multinode-509402-m02" exists ...
	I1101 09:28:39.018756  246946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-509402-m02
	I1101 09:28:39.038639  246946 host.go:66] Checking if "multinode-509402-m02" exists ...
	I1101 09:28:39.038897  246946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1101 09:28:39.038959  246946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-509402-m02
	I1101 09:28:39.056825  246946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21833-104443/.minikube/machines/multinode-509402-m02/id_rsa Username:docker}
	I1101 09:28:39.155468  246946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1101 09:28:39.168750  246946 status.go:176] multinode-509402-m02 status: &{Name:multinode-509402-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:28:39.168799  246946 status.go:174] checking status of multinode-509402-m03 ...
	I1101 09:28:39.169089  246946 cli_runner.go:164] Run: docker container inspect multinode-509402-m03 --format={{.State.Status}}
	I1101 09:28:39.189205  246946 status.go:371] multinode-509402-m03 host status = "Stopped" (err=<nil>)
	I1101 09:28:39.189230  246946 status.go:384] host is not running, skipping remaining checks
	I1101 09:28:39.189239  246946 status.go:176] multinode-509402-m03 status: &{Name:multinode-509402-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.33s)

TestMultiNode/serial/StartAfterStop (7.28s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-509402 node start m03 -v=5 --alsologtostderr: (6.542320726s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.28s)
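Individual nodes can be cycled without touching the rest of the cluster, which is what StopNode and StartAfterStop verify ("demo" is a placeholder profile):
    out/minikube-linux-amd64 -p demo node stop m03
    out/minikube-linux-amd64 -p demo node start m03
    out/minikube-linux-amd64 -p demo status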

TestMultiNode/serial/RestartKeepsNodes (55.68s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-509402
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-509402
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-509402: (29.654377155s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-509402 --wait=true -v=5 --alsologtostderr
E1101 09:29:24.741639  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-509402 --wait=true -v=5 --alsologtostderr: (25.895508131s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-509402
--- PASS: TestMultiNode/serial/RestartKeepsNodes (55.68s)

TestMultiNode/serial/DeleteNode (5.16s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-509402 node delete m03: (4.510695307s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.16s)

TestMultiNode/serial/StopMultiNode (28.63s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-509402 stop: (28.418135256s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-509402 status: exit status 7 (104.10842ms)

-- stdout --
	multinode-509402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-509402-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr: exit status 7 (102.84224ms)

-- stdout --
	multinode-509402
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-509402-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1101 09:30:15.891819  256316 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:30:15.892141  256316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:15.892152  256316 out.go:374] Setting ErrFile to fd 2...
	I1101 09:30:15.892156  256316 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:30:15.892337  256316 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:30:15.892496  256316 out.go:368] Setting JSON to false
	I1101 09:30:15.892521  256316 mustload.go:66] Loading cluster: multinode-509402
	I1101 09:30:15.892666  256316 notify.go:221] Checking for updates...
	I1101 09:30:15.893395  256316 config.go:182] Loaded profile config "multinode-509402": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:30:15.893428  256316 status.go:174] checking status of multinode-509402 ...
	I1101 09:30:15.894611  256316 cli_runner.go:164] Run: docker container inspect multinode-509402 --format={{.State.Status}}
	I1101 09:30:15.913924  256316 status.go:371] multinode-509402 host status = "Stopped" (err=<nil>)
	I1101 09:30:15.913965  256316 status.go:384] host is not running, skipping remaining checks
	I1101 09:30:15.913975  256316 status.go:176] multinode-509402 status: &{Name:multinode-509402 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1101 09:30:15.914032  256316 status.go:174] checking status of multinode-509402-m02 ...
	I1101 09:30:15.914406  256316 cli_runner.go:164] Run: docker container inspect multinode-509402-m02 --format={{.State.Status}}
	I1101 09:30:15.932847  256316 status.go:371] multinode-509402-m02 host status = "Stopped" (err=<nil>)
	I1101 09:30:15.932877  256316 status.go:384] host is not running, skipping remaining checks
	I1101 09:30:15.932886  256316 status.go:176] multinode-509402-m02 status: &{Name:multinode-509402-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.63s)

TestMultiNode/serial/RestartMultiNode (51.76s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-509402 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-509402 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.110300565s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-509402 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.76s)

TestMultiNode/serial/ValidateNameConflict (23.42s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-509402
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-509402-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-509402-m02 --driver=docker  --container-runtime=crio: exit status 14 (83.580093ms)

-- stdout --
	* [multinode-509402-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-509402-m02' is duplicated with machine name 'multinode-509402-m02' in profile 'multinode-509402'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-509402-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-509402-m03 --driver=docker  --container-runtime=crio: (20.477687822s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-509402
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-509402: exit status 80 (309.367464ms)

-- stdout --
	* Adding node m03 to cluster multinode-509402 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-509402-m03 already exists in multinode-509402-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-509402-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-509402-m03: (2.480399086s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.42s)

TestPreload (111.16s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-496599 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-496599 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.124680094s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-496599 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-496599 image pull gcr.io/k8s-minikube/busybox: (2.625957015s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-496599
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-496599: (5.915895024s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-496599 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1101 09:33:14.457167  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-496599 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (37.762565352s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-496599 image list
helpers_test.go:175: Cleaning up "test-preload-496599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-496599
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-496599: (2.491616539s)
--- PASS: TestPreload (111.16s)
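The preload round-trip above amounts to: start without a preloaded tarball, pull an extra image, stop, restart, and confirm the image survived. By hand ("demo" is a placeholder profile):
    out/minikube-linux-amd64 start -p demo --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p demo image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-amd64 stop -p demo && out/minikube-linux-amd64 start -p demo
    out/minikube-linux-amd64 -p demo image list    # busybox should still be listed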

TestScheduledStopUnix (97.69s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-957558 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-957558 --memory=3072 --driver=docker  --container-runtime=crio: (20.650428169s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-957558 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-957558 -n scheduled-stop-957558
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-957558 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1101 09:33:47.682706  107955 retry.go:31] will retry after 84.952µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.683926  107955 retry.go:31] will retry after 110.969µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.685080  107955 retry.go:31] will retry after 149.663µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.686215  107955 retry.go:31] will retry after 438.798µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.687348  107955 retry.go:31] will retry after 254.901µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.688477  107955 retry.go:31] will retry after 821.048µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.689598  107955 retry.go:31] will retry after 598.288µs: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.690721  107955 retry.go:31] will retry after 2.35512ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.693935  107955 retry.go:31] will retry after 3.240596ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.698158  107955 retry.go:31] will retry after 2.07935ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.700294  107955 retry.go:31] will retry after 5.637262ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.706508  107955 retry.go:31] will retry after 6.773095ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.714009  107955 retry.go:31] will retry after 8.182391ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.723336  107955 retry.go:31] will retry after 18.399659ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.742582  107955 retry.go:31] will retry after 31.062213ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
I1101 09:33:47.773790  107955 retry.go:31] will retry after 31.421906ms: open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/scheduled-stop-957558/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-957558 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-957558 -n scheduled-stop-957558
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-957558
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-957558 --schedule 15s
E1101 09:34:24.742384  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-957558
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-957558: exit status 7 (96.024561ms)
-- stdout --
	scheduled-stop-957558
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-957558 -n scheduled-stop-957558
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-957558 -n scheduled-stop-957558: exit status 7 (84.998509ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-957558" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-957558
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-957558: (5.446320529s)
--- PASS: TestScheduledStopUnix (97.69s)
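For reference, the scheduled-stop flow exercised above can be driven by hand with the same flags the test uses (a minimal sketch; the profile name is hypothetical):

    $ minikube start -p scheduled-stop-demo --memory=3072 --driver=docker --container-runtime=crio
    $ minikube stop -p scheduled-stop-demo --schedule 5m                # arm a stop five minutes out
    $ minikube status --format={{.TimeToStop}} -p scheduled-stop-demo   # inspect the pending stop
    $ minikube stop -p scheduled-stop-demo --cancel-scheduled           # disarm it
    $ minikube stop -p scheduled-stop-demo --schedule 15s               # re-arm; once it fires, status exits 7 with host/kubelet/apiserver Stopped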
TestInsufficientStorage (10.44s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-186251 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-186251 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.87106744s)
-- stdout --
	{"specversion":"1.0","id":"ade3ada7-d351-4611-a682-0f6543356ba2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-186251] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ba4d4037-7da6-423d-988d-50e1d7d1e882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21833"}}
	{"specversion":"1.0","id":"b86963c3-dc88-41fb-aa1f-e08c9f855456","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ada4e5d7-412f-41ff-887c-068f53328582","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig"}}
	{"specversion":"1.0","id":"fe88e791-e0ec-4919-b2e5-4f4bd081a986","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube"}}
	{"specversion":"1.0","id":"47cf822a-c1b2-4670-8b5d-0a5bf48f9c81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"25256d52-c9f1-42c2-b979-48c51292cf2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64bcdef2-66d9-44dc-bf5d-7bb085f59ba7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9212210f-2a14-469f-ba0f-ea9022e70dc5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"20d7233f-d214-4555-a364-d467412ca7e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"def070fb-d0f8-4f22-832a-390727c55f33","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"aec89b58-0b94-4024-b328-e8bfb6b2f6a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-186251\" primary control-plane node in \"insufficient-storage-186251\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed6983b5-0adf-46cf-ae10-104bec81969e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a9614ba7-4183-425b-814c-a0821937516e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"93a15c99-3ba8-4201-8a80-880ce6a6e6be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-186251 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-186251 --output=json --layout=cluster: exit status 7 (316.784203ms)
-- stdout --
	{"Name":"insufficient-storage-186251","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-186251","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1101 09:35:12.421151  276627 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-186251" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-186251 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-186251 --output=json --layout=cluster: exit status 7 (308.825337ms)
-- stdout --
	{"Name":"insufficient-storage-186251","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-186251","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1101 09:35:12.731347  276735 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-186251" does not appear in /home/jenkins/minikube-integration/21833-104443/kubeconfig
	E1101 09:35:12.742080  276735 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/insufficient-storage-186251/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-186251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-186251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-186251: (1.945103711s)
--- PASS: TestInsufficientStorage (10.44s)
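The low-disk condition above is simulated, not real: the harness sets the test-only variables MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 (both visible in the JSON events), and start then fails its preflight with exit code 26 (RSRC_DOCKER_STORAGE). A sketch of the same invocation, assuming those variables are honored outside the harness:

    $ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
        minikube start -p insufficient-storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=crio
    $ echo $?    # 26; per the emitted advice, '--force' skips the check and 'docker system prune' frees real space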
TestRunningBinaryUpgrade (84.87s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2397235574 start -p running-upgrade-256879 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2397235574 start -p running-upgrade-256879 --memory=3072 --vm-driver=docker  --container-runtime=crio: (52.236225554s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-256879 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-256879 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.405996299s)
helpers_test.go:175: Cleaning up "running-upgrade-256879" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-256879
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-256879: (4.576975824s)
--- PASS: TestRunningBinaryUpgrade (84.87s)
TestKubernetesUpgrade (306.3s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.621134007s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-676776
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-676776: (2.074591223s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-676776 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-676776 status --format={{.Host}}: exit status 7 (101.445839ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.128756553s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-676776 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (117.148513ms)
-- stdout --
	* [kubernetes-upgrade-676776] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-676776
	    minikube start -p kubernetes-upgrade-676776 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6767762 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-676776 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-676776 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.45761451s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-676776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-676776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-676776: (2.705945544s)
--- PASS: TestKubernetesUpgrade (306.30s)
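Stripped of the test plumbing, the upgrade/downgrade contract verified above is (profile name hypothetical, versions as in the log):

    $ minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    $ minikube stop -p upgrade-demo
    $ minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=crio   # in-place upgrade is allowed
    $ minikube start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # downgrade refused: exit 106, K8S_DOWNGRADE_UNSUPPORTED
    $ minikube delete -p upgrade-demo && minikube start -p upgrade-demo --kubernetes-version=v1.28.0                       # the recovery minikube itself suggests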
TestMissingContainerUpgrade (66.88s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1722216035 start -p missing-upgrade-749376 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1722216035 start -p missing-upgrade-749376 --memory=3072 --driver=docker  --container-runtime=crio: (25.079491624s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-749376
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-749376: (1.780654639s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-749376
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-749376 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 09:38:14.457097  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-749376 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.841088586s)
helpers_test.go:175: Cleaning up "missing-upgrade-749376" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-749376
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-749376: (2.531004954s)
--- PASS: TestMissingContainerUpgrade (66.88s)
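The missing-container scenario boils down to: provision with an old release binary, destroy the cluster's Docker container behind minikube's back, then confirm a newer binary can recreate it. A sketch, where minikube-v1.32.0 stands in for the pinned binary the harness downloads to /tmp:

    $ minikube-v1.32.0 start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio
    $ docker stop missing-demo && docker rm missing-demo     # simulate the lost container
    $ minikube start -p missing-demo --memory=3072 --driver=docker --container-runtime=crio   # current binary rebuilds the node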
TestStoppedBinaryUpgrade/Setup (3.22s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.22s)
TestStoppedBinaryUpgrade/Upgrade (72.31s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2678431816 start -p stopped-upgrade-228852 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2678431816 start -p stopped-upgrade-228852 --memory=3072 --vm-driver=docker  --container-runtime=crio: (52.20016968s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2678431816 -p stopped-upgrade-228852 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2678431816 -p stopped-upgrade-228852 stop: (3.888740784s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-228852 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1101 09:36:17.528459  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/addons-993117/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-228852 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.221813905s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (72.31s)
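TestRunningBinaryUpgrade and TestStoppedBinaryUpgrade follow the same shape, differing only in whether the cluster is stopped before the handover (a sketch; minikube-v1.32.0 again stands in for the pinned old binary):

    $ minikube-v1.32.0 start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=crio
    $ minikube-v1.32.0 -p upgrade-demo stop                  # stopped-upgrade variant only
    $ minikube start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=crio   # binary under test adopts the existing profile
    $ minikube logs -p upgrade-demo                          # checked afterwards by the MinikubeLogs subtest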
TestNetworkPlugins/group/false (5.75s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-307390 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-307390 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (582.675366ms)
-- stdout --
	* [false-307390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1101 09:35:20.271606  278589 out.go:360] Setting OutFile to fd 1 ...
	I1101 09:35:20.271864  278589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:35:20.271875  278589 out.go:374] Setting ErrFile to fd 2...
	I1101 09:35:20.271879  278589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1101 09:35:20.272155  278589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21833-104443/.minikube/bin
	I1101 09:35:20.272691  278589 out.go:368] Setting JSON to false
	I1101 09:35:20.273659  278589 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4658,"bootTime":1761985062,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1101 09:35:20.273754  278589 start.go:143] virtualization: kvm guest
	I1101 09:35:20.348185  278589 out.go:179] * [false-307390] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1101 09:35:20.368392  278589 notify.go:221] Checking for updates...
	I1101 09:35:20.368420  278589 out.go:179]   - MINIKUBE_LOCATION=21833
	I1101 09:35:20.438673  278589 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1101 09:35:20.478263  278589 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	I1101 09:35:20.479598  278589 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	I1101 09:35:20.481601  278589 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1101 09:35:20.483531  278589 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1101 09:35:20.485803  278589 config.go:182] Loaded profile config "offline-crio-203516": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1101 09:35:20.485973  278589 driver.go:422] Setting default libvirt URI to qemu:///system
	I1101 09:35:20.510350  278589 docker.go:124] docker version: linux-28.5.1:Docker Engine - Community
	I1101 09:35:20.510473  278589 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1101 09:35:20.574926  278589 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-01 09:35:20.562156634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1101 09:35:20.575054  278589 docker.go:319] overlay module found
	I1101 09:35:20.637107  278589 out.go:179] * Using the docker driver based on user configuration
	I1101 09:35:20.687455  278589 start.go:309] selected driver: docker
	I1101 09:35:20.687520  278589 start.go:930] validating driver "docker" against <nil>
	I1101 09:35:20.687545  278589 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1101 09:35:20.728892  278589 out.go:203] 
	W1101 09:35:20.782377  278589 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1101 09:35:20.785361  278589 out.go:203] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-307390 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-307390
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-307390
>>> host: /etc/nsswitch.conf:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/hosts:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/resolv.conf:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-307390
>>> host: crictl pods:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: crictl containers:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> k8s: describe netcat deployment:
error: context "false-307390" does not exist
>>> k8s: describe netcat pod(s):
error: context "false-307390" does not exist
>>> k8s: netcat logs:
error: context "false-307390" does not exist
>>> k8s: describe coredns deployment:
error: context "false-307390" does not exist
>>> k8s: describe coredns pods:
error: context "false-307390" does not exist
>>> k8s: coredns logs:
error: context "false-307390" does not exist
>>> k8s: describe api server pod(s):
error: context "false-307390" does not exist
>>> k8s: api server logs:
error: context "false-307390" does not exist
>>> host: /etc/cni:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: ip a s:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: ip r s:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: iptables-save:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: iptables table nat:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> k8s: describe kube-proxy daemon set:
error: context "false-307390" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "false-307390" does not exist
>>> k8s: kube-proxy logs:
error: context "false-307390" does not exist
>>> host: kubelet daemon status:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: kubelet daemon config:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> k8s: kubelet logs:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-307390
>>> host: docker daemon status:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: docker daemon config:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/docker/daemon.json:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: docker system info:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: cri-docker daemon status:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: cri-docker daemon config:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: cri-dockerd version:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: containerd daemon status:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: containerd daemon config:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/containerd/config.toml:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: containerd config dump:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: crio daemon status:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: crio daemon config:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: /etc/crio:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
>>> host: crio config:
* Profile "false-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-307390"
----------------------- debugLogs end: false-307390 [took: 4.972354519s] --------------------------------
helpers_test.go:175: Cleaning up "false-307390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-307390
--- PASS: TestNetworkPlugins/group/false (5.75s)
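What this subtest asserts is purely a flag-validation path: with the crio runtime, --cni=false is rejected before any node is created (exit 14, MK_USAGE: the "crio" container runtime requires CNI), which is why every debugLogs probe above reports a missing profile or context. A sketch of the rejected call next to a valid one (--cni=bridge is an assumption; any supported CNI value would do):

    $ minikube start -p cni-demo --cni=false --driver=docker --container-runtime=crio    # exit 14: crio requires CNI
    $ minikube start -p cni-demo --cni=bridge --driver=docker --container-runtime=crio   # select a CNI instead of disabling it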
TestPause/serial/Start (53.4s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-902975 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1101 09:35:47.806692  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-902975 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (53.396641732s)
--- PASS: TestPause/serial/Start (53.40s)
TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-481344 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-481344 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (124.042294ms)
-- stdout --
	* [NoKubernetes-481344] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21833
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21833-104443/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21833-104443/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.12s)
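The conflict asserted above, reduced to its CLI shape (profile name hypothetical; the unset command is quoted from minikube's own hint):

    $ minikube start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio   # exit 14: flags conflict
    $ minikube config unset kubernetes-version               # clear any globally configured version first
    $ minikube start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=crio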
TestNoKubernetes/serial/StartWithK8s (23.92s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-481344 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-481344 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.487175855s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-481344 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (23.92s)
TestPause/serial/SecondStartNoReconfiguration (7.95s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-902975 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-902975 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.940759351s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.95s)
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-228852
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-228852: (1.211832861s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)
TestNoKubernetes/serial/StartWithStopK8s (21.32s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-481344 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-481344 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.620929696s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-481344 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-481344 status -o json: exit status 2 (429.263717ms)
-- stdout --
	{"Name":"NoKubernetes-481344","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-481344
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-481344: (2.272772685s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (21.32s)
TestNoKubernetes/serial/Start (11.41s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-481344 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-481344 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (11.408463888s)
--- PASS: TestNoKubernetes/serial/Start (11.41s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.4s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-481344 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-481344 "sudo systemctl is-active --quiet service kubelet": exit status 1 (397.517823ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.40s)
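Note (not part of the run): the check relies on "systemctl is-active --quiet" exiting 0 only for an active unit, so the status-3 exit surfaced through ssh confirms kubelet is not running. The same state can be read by hand, since without --quiet the unit state is printed:
	out/minikube-linux-amd64 ssh -p NoKubernetes-481344 "sudo systemctl is-active kubelet"   # prints "inactive", typically exit status 3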

TestNoKubernetes/serial/ProfileList (3.91s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.990728728s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.91s)

TestNoKubernetes/serial/Stop (1.32s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-481344
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-481344: (1.318861867s)
--- PASS: TestNoKubernetes/serial/Stop (1.32s)

TestNoKubernetes/serial/StartNoArgs (7.64s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-481344 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-481344 --driver=docker  --container-runtime=crio: (7.643695752s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.64s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-481344 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-481344 "sudo systemctl is-active --quiet service kubelet": exit status 1 (346.813728ms)
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestNetworkPlugins/group/auto/Start (42.64s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (42.639580685s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.64s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-307390 "pgrep -a kubelet"
I1101 09:38:19.903119  107955 config.go:182] Loaded profile config "auto-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lsxd8" [b67077a8-85ed-4c17-a4f1-e491a78a9a82] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lsxd8" [b67077a8-85ed-4c17-a4f1-e491a78a9a82] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004387999s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)
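Note (not part of the run): the harness polls pods matching app=netcat until they report Ready, tolerating the initial Pending/ContainersNotReady phase shown above. A rough stand-alone equivalent:
	kubectl --context auto-307390 wait --for=condition=Ready pod -l app=netcat --timeout=15m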

TestNetworkPlugins/group/auto/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
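Note (not part of the run): the HairPin probe has the netcat pod dial back to itself through its own Service name, which only succeeds when the CNI handles hairpin traffic; nc's -z does a connect-only scan and -w 5 caps the wait at five seconds, so the exit code alone decides the test. By hand:
	kubectl --context auto-307390 exec deployment/netcat -- nc -w 5 -z netcat 8080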

TestNetworkPlugins/group/flannel/Start (47.35s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.350123273s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.35s)

TestNetworkPlugins/group/enable-default-cni/Start (39.88s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (39.882775935s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (39.88s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vxjdk" [cf554d6d-2ddd-4cea-9fe6-565fdf0b11a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003922734s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-307390 "pgrep -a kubelet"
I1101 09:39:23.114379  107955 config.go:182] Loaded profile config "flannel-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bm8pv" [00ebe8bc-fe9a-4761-969a-453092db90b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1101 09:39:24.742254  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bm8pv" [00ebe8bc-fe9a-4761-969a-453092db90b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004130746s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-307390 "pgrep -a kubelet"
I1101 09:39:29.749470  107955 config.go:182] Loaded profile config "enable-default-cni-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-slgtr" [9f48e51f-458b-49a7-aa2a-36b16cd85670] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-slgtr" [9f48e51f-458b-49a7-aa2a-36b16cd85670] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003503224s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/bridge/Start (39.97s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (39.96858254s)
--- PASS: TestNetworkPlugins/group/bridge/Start (39.97s)

TestNetworkPlugins/group/calico/Start (48.87s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (48.866540445s)
--- PASS: TestNetworkPlugins/group/calico/Start (48.87s)

TestNetworkPlugins/group/kindnet/Start (44.9s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (44.898330339s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (44.90s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-307390 "pgrep -a kubelet"
I1101 09:40:32.682550  107955 config.go:182] Loaded profile config "bridge-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.37s)

TestNetworkPlugins/group/bridge/NetCatPod (9.25s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-874hf" [c2b2845b-311d-4515-94d5-d9c4205e61b8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-874hf" [c2b2845b-311d-4515-94d5-d9c4205e61b8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003882132s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.25s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-mqbss" [27bf19ab-7a69-4036-abec-d55ec1fb659a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-mqbss" [27bf19ab-7a69-4036-abec-d55ec1fb659a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004232327s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-307390 "pgrep -a kubelet"
I1101 09:40:55.120765  107955 config.go:182] Loaded profile config "calico-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (12.21s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-p5mwv" [fd53cf81-6dab-4d37-a46c-f42f41e1a242] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-p5mwv" [fd53cf81-6dab-4d37-a46c-f42f41e1a242] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.007693797s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.21s)

TestNetworkPlugins/group/custom-flannel/Start (52.55s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-307390 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (52.549182129s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (52.55s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-lbrlz" [32fc3dc9-2e87-4a50-8c3f-089a23f78e57] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004348926s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/DNS (0.31s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.31s)

TestNetworkPlugins/group/calico/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-307390 "pgrep -a kubelet"
I1101 09:41:11.476030  107955 config.go:182] Loaded profile config "kindnet-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

TestNetworkPlugins/group/kindnet/NetCatPod (17.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ww4jv" [1b9bc8a2-9772-40dc-a1e8-286ffe0cf4ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ww4jv" [1b9bc8a2-9772-40dc-a1e8-286ffe0cf4ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 17.003650573s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (17.20s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (50.95s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.944355829s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.95s)

TestStartStop/group/no-preload/serial/FirstStart (58.83s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (58.827898896s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (58.83s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-307390 "pgrep -a kubelet"
I1101 09:41:55.903403  107955 config.go:182] Loaded profile config "custom-flannel-307390": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-307390 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rjctk" [bd54b091-9b23-4a3c-b725-7c2971e0cbc6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-rjctk" [bd54b091-9b23-4a3c-b725-7c2971e0cbc6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.0064588s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-307390 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-307390 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)
E1101 09:44:16.811413  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:16.817862  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:16.829322  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:16.850857  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:16.892379  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:16.973897  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:17.135738  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:17.458024  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:18.100076  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:19.381673  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
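Note: the repeated cert_rotation errors above come from a background client-cert reload watcher still pointed at the deleted flannel-307390 profile; the timestamps step roughly 6 ms, 12 ms, 21 ms, 42 ms, 81 ms, 161 ms, 322 ms, 642 ms, then ~1.3 s apart, i.e. an exponential retry backoff, and the surrounding tests continue to pass.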

TestStartStop/group/embed-certs/serial/FirstStart (46.16s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (46.158840922s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (46.16s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-106430 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [34bda5ed-1800-4728-8a22-d00b1e7edd29] Pending
helpers_test.go:352: "busybox" [34bda5ed-1800-4728-8a22-d00b1e7edd29] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [34bda5ed-1800-4728-8a22-d00b1e7edd29] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.005723407s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-106430 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.30s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.81s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.813761928s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.81s)

TestStartStop/group/old-k8s-version/serial/Stop (16.29s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-106430 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-106430 --alsologtostderr -v=3: (16.294665305s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.29s)

TestStartStop/group/no-preload/serial/DeployApp (10.26s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-224845 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9e4e4413-d3b7-4a5f-b088-241e94f310a4] Pending
helpers_test.go:352: "busybox" [9e4e4413-d3b7-4a5f-b088-241e94f310a4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9e4e4413-d3b7-4a5f-b088-241e94f310a4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003942947s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-224845 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.26s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430: exit status 7 (93.129509ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-106430 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
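Note: "exit status 7 (may be ok)" is the harness tolerating a non-zero status from a deliberately stopped profile; the Stopped value on stdout is the precondition this test wants before re-enabling the dashboard addon with the pinned MetricsScraper image.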

TestStartStop/group/old-k8s-version/serial/SecondStart (45.9s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-106430 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (45.492311464s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-106430 -n old-k8s-version-106430
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (45.90s)

TestStartStop/group/no-preload/serial/Stop (16.99s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-224845 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-224845 --alsologtostderr -v=3: (16.988372517s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.99s)

TestStartStop/group/embed-certs/serial/DeployApp (10.25s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-214580 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b5303634-8aad-428d-8ab1-7ac3875ed855] Pending
helpers_test.go:352: "busybox" [b5303634-8aad-428d-8ab1-7ac3875ed855] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b5303634-8aad-428d-8ab1-7ac3875ed855] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003788977s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-214580 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.25s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b82218bf-2168-45f8-93dd-1a8f99a46423] Pending
helpers_test.go:352: "busybox" [b82218bf-2168-45f8-93dd-1a8f99a46423] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b82218bf-2168-45f8-93dd-1a8f99a46423] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004537896s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/Stop (18.17s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-214580 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-214580 --alsologtostderr -v=3: (18.174672s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.53s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-927869 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-927869 --alsologtostderr -v=3: (16.534359878s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.53s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845: exit status 7 (90.727394ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-224845 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (22.53s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:43:20.111838  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.118219  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.129681  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.151898  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.193600  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.275398  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.437229  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:20.759581  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:21.401155  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:22.683624  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:25.245791  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:43:30.367770  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-224845 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (22.064363126s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-224845 -n no-preload-224845
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (22.53s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580: exit status 7 (94.315211ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-214580 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (45.77s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-214580 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.418036002s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-214580 -n embed-certs-214580
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (45.77s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869: exit status 7 (105.89628ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-927869 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.82s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-927869 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.445249437s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-927869 -n default-k8s-diff-port-927869
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.82s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xc92m" [79c2ef77-baca-4182-8bd9-a64e4379615f] Running
E1101 09:43:40.610238  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004297662s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
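
The "waiting 9m0s for pods matching ..." step above polls until a dashboard pod reports Ready. Outside the harness, an approximately equivalent one-liner (context name and label taken from the log) would be:

  $ kubectl --context old-k8s-version-106430 -n kubernetes-dashboard \
      wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m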

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbzt6" [5cc5ae62-ff49-4cb6-8b46-6c99687d75e6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbzt6" [5cc5ae62-ff49-4cb6-8b46-6c99687d75e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.006740223s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (13.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-xc92m" [79c2ef77-baca-4182-8bd9-a64e4379615f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00501146s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-106430 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-106430 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
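
The image verification step parses the output of "image list --format=json". Assuming the JSON is an array of image objects with a repoTags field (true of recent minikube releases, but worth confirming against your build), the same list the test scans could be produced by hand with:

  $ out/minikube-linux-amd64 -p old-k8s-version-106430 image list --format=json | jq -r '.[].repoTags[]'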

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbzt6" [5cc5ae62-ff49-4cb6-8b46-6c99687d75e6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003744667s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-224845 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-224845 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/newest-cni/serial/FirstStart (27.22s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:44:01.092027  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/auto-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.218669101s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.22s)
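
The --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 flag is minikube's pass-through for component settings: the part after "kubeadm." is handed to kubeadm, so the cluster comes up with a 10.42.0.0/16 Pod network but no CNI plugin installed, and --wait=apiserver,system_pods,default_sa deliberately waits only for the API server, system pods, and default service account. One hedged way to confirm the CIDR took effect is to inspect a node's Pod CIDR allocation, which should be a slice of the /16 (typically a /24):

  $ kubectl --context newest-cni-722387 get nodes -o jsonpath='{.items[0].spec.podCIDR}{"\n"}'
  10.42.0.0/24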

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pcx7c" [a1ea5a6d-90cf-47e8-b721-ea8375535952] Running
E1101 09:44:21.945094  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:24.741942  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/functional-224473/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.007312461s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rlr8h" [c9986c15-8b9c-4a12-9e39-60df5c19b4c5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004876386s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-pcx7c" [a1ea5a6d-90cf-47e8-b721-ea8375535952] Running
E1101 09:44:27.066848  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/flannel-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003830903s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-214580 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (8.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-722387 --alsologtostderr -v=3
E1101 09:44:30.244998  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:30.567015  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1101 09:44:31.209260  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-722387 --alsologtostderr -v=3: (8.084414612s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-214580 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rlr8h" [c9986c15-8b9c-4a12-9e39-60df5c19b4c5] Running
E1101 09:44:32.490783  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004104425s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-927869 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-927869 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387: exit status 7 (94.951394ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-722387 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
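
The --images=MetricsScraper=registry.k8s.io/echoserver:1.4 argument overrides a single image used by the dashboard addon; the test deliberately points the metrics scraper at a stand-in image. To list which image names an addon accepts for overriding, minikube ships an inspection subcommand (assuming it is available in this build):

  $ out/minikube-linux-amd64 addons images dashboard -p newest-cni-722387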

TestStartStop/group/newest-cni/serial/SecondStart (11.15s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1101 09:44:40.175366  107955 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21833-104443/.minikube/profiles/enable-default-cni-307390/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-722387 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.796950145s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-722387 -n newest-cni-722387
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.15s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
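
Both skips above flag that "cni mode requires additional setup": the cluster was started with --network-plugin=cni but no CNI plugin, so workload pods cannot schedule until one is applied. A sketch of that missing setup, using flannel as an example (its manifest defaults to 10.244.0.0/16 and would need editing to match this cluster's 10.42.0.0/16 before applying):

  $ curl -LO https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
  $ # edit kube-flannel.yml: change the Network value to 10.42.0.0/16
  $ kubectl --context newest-cni-722387 apply -f kube-flannel.yml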

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-722387 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (5.52s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-307390 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-307390

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-307390

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/hosts:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/resolv.conf:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-307390

>>> host: crictl pods:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: crictl containers:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> k8s: describe netcat deployment:
error: context "kubenet-307390" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-307390" does not exist

>>> k8s: netcat logs:
error: context "kubenet-307390" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-307390" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-307390" does not exist

>>> k8s: coredns logs:
error: context "kubenet-307390" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-307390" does not exist

>>> k8s: api server logs:
error: context "kubenet-307390" does not exist

>>> host: /etc/cni:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: ip a s:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: ip r s:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: iptables-save:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: iptables table nat:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-307390" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-307390" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-307390" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: kubelet daemon config:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> k8s: kubelet logs:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-307390

>>> host: docker daemon status:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: docker daemon config:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: docker system info:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: cri-docker daemon status:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: cri-docker daemon config:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: cri-dockerd version:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: containerd daemon status:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: containerd daemon config:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: containerd config dump:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: crio daemon status:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: crio daemon config:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: /etc/crio:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

>>> host: crio config:
* Profile "kubenet-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-307390"

----------------------- debugLogs end: kubenet-307390 [took: 5.089762877s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-307390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-307390
--- SKIP: TestNetworkPlugins/group/kubenet (5.52s)

TestNetworkPlugins/group/cilium (5.08s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-307390 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-307390

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-307390

>>> host: /etc/nsswitch.conf:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/hosts:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/resolv.conf:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-307390

>>> host: crictl pods:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: crictl containers:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> k8s: describe netcat deployment:
error: context "cilium-307390" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-307390" does not exist

>>> k8s: netcat logs:
error: context "cilium-307390" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-307390" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-307390" does not exist

>>> k8s: coredns logs:
error: context "cilium-307390" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-307390" does not exist

>>> k8s: api server logs:
error: context "cilium-307390" does not exist

>>> host: /etc/cni:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: ip a s:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: ip r s:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: iptables-save:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: iptables table nat:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-307390

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-307390

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-307390" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-307390" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-307390

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-307390

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-307390" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-307390" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-307390" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-307390" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-307390" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: kubelet daemon config:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> k8s: kubelet logs:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-307390

>>> host: docker daemon status:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: docker daemon config:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: docker system info:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: cri-docker daemon status:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: cri-docker daemon config:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: cri-dockerd version:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: containerd daemon status:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: containerd daemon config:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: containerd config dump:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: crio daemon status:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: crio daemon config:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: /etc/crio:
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"

>>> host: crio config:
* Profile "cilium-307390" not found. Run "minikube start -p cilium-307390"
* Profile "cilium-307390" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-307390"
----------------------- debugLogs end: cilium-307390 [took: 4.834881647s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-307390" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-307390
--- SKIP: TestNetworkPlugins/group/cilium (5.08s)
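Note: every probe in the debugLogs dump above fails identically because the cilium-307390 profile was skipped and cleaned up before the diagnostics ran, so no kubeconfig context for it ever existed. A minimal sketch of that failure mode, assuming only that kubectl is on PATH (the probeContext helper is illustrative, not minikube's actual debugLogs code):

package main

import (
	"fmt"
	"os/exec"
)

// probeContext runs kubectl against a named context and prints whatever
// comes back; for a context that was never created, kubectl exits non-zero
// with "context was not found for specified context: <name>".
func probeContext(name string, args ...string) {
	kubectlArgs := append([]string{"--context", name}, args...)
	out, err := exec.Command("kubectl", kubectlArgs...).CombinedOutput()
	fmt.Printf(">>> kubectl --context %s %v:\n%s(err: %v)\n", name, args, out, err)
}

func main() {
	// cilium-307390 is the profile from the log above; since it does not
	// exist, both probes print the same configuration error.
	probeContext("cilium-307390", "get", "nodes")
	probeContext("cilium-307390", "describe", "deployment", "netcat")
}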

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-309397" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-309397
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
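Note: this group is gated on the VirtualBox driver, so under this job's docker/crio configuration it skips before any cluster is created. A hedged sketch of that kind of guard, assuming the driver name reaches the test through an environment variable (the real check at start_stop_delete_test.go:101 reads minikube's own test flags):

package startstop

import (
	"os"
	"testing"
)

// TestDisableDriverMounts mirrors the skip reported above: the scenario
// only makes sense where host-folder mounts can be disabled, i.e. on the
// virtualbox driver.
func TestDisableDriverMounts(t *testing.T) {
	// MINIKUBE_DRIVER is a hypothetical stand-in for however the harness
	// exposes the active driver to the test.
	if driver := os.Getenv("MINIKUBE_DRIVER"); driver != "virtualbox" {
		t.Skipf("skipping - only runs on virtualbox (driver=%q)", driver)
	}
	// start/stop assertions for the disable-driver-mounts profile would run here
}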
